How to use wukevin/tcr-bert with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="wukevin/tcr-bert")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("wukevin/tcr-bert")
model = AutoModelForSequenceClassification.from_pretrained("wukevin/tcr-bert")
```
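The tokenizer treats each amino acid as its own token, so CDR3 sequences should be passed with spaces between residues, as in the example inputs further down. A minimal sketch of that preprocessing step (`space_out` is a hypothetical helper name, not part of the repo):

```python
def space_out(seq: str) -> str:
    # Hypothetical helper: insert spaces between residues so the
    # tokenizer sees one amino acid per token.
    return " ".join(seq)

print(space_out("CASSPVTGGIYGYTF"))  # C A S S P V T G G I Y G Y T F
```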
TCR transformer model
See our full codebase and our preprint for more information.
This model is trained on:
- Masked language modeling (masked amino acid or MAA modeling)
- Classification across antigen labels from PIRD
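For the masked amino acid (MAA) objective, an input would have one residue replaced by the tokenizer's mask token. A sketch of constructing such an input, assuming the standard BERT-style `[MASK]` string (check `tokenizer.mask_token` to confirm for this model):

```python
# Start from a space-separated CDR3 sequence (the model card's input format)
seq = "C A S S P V T G G I Y G Y T F".split()

# Mask one residue (here the "P" at index 4); "[MASK]" is the assumed
# BERT-style mask token, not verified against this tokenizer
masked = seq.copy()
masked[4] = "[MASK]"
masked_input = " ".join(masked)

print(masked_input)  # C A S S [MASK] V T G G I Y G Y T F
```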
If you are looking for a model trained only on MAA, please see our other model.
Example inputs:
- `C A S S P V T G G I Y G Y T F` (binds to NLVPMVATV CMV antigen)
- `C A T S G R A G V E Q F F` (binds to GILGFVFTL flu antigen)
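Assuming classification goes through the Transformers pipeline shown above, the two example sequences could be scored as follows. The pipeline call is left commented out because it downloads the model; variable names are illustrative:

```python
# The two example CDR3 sequences, written compactly, then space-separated
# to match the model card's input format
examples = ["CASSPVTGGIYGYTF", "CATSGRAGVEQFF"]
inputs = [" ".join(s) for s in examples]

# from transformers import pipeline
# pipe = pipeline("text-classification", model="wukevin/tcr-bert")
# pipe(inputs)  # would return label/score dicts across PIRD antigen labels

print(inputs)
```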