How to use mascIT/bert-tiny-ita-lemma-classification with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="mascIT/bert-tiny-ita-lemma-classification")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mascIT/bert-tiny-ita-lemma-classification")
model = AutoModelForSequenceClassification.from_pretrained("mascIT/bert-tiny-ita-lemma-classification")
```
bert-tiny-ita-lemma-classification is a text-classification model (based on bert-tiny-ita) fine-tuned for a lemma classification task on a private dataset of high-quality dictionary data.
It classifies an Italian lemma into the following classes:
- AGG: 0
- VERBO_INTRANSITIVO: 1
- VERBO_TRANSITIVO: 2
- SOST_MASCHILE: 3
- SOST_FEMMINILE: 4
- AVVERBIO: 5
- AGG_SOSTANTIVATO: 6
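As a minimal sketch of how these labels map to class indices, the list above can be expressed as a `label2id` dict and inverted to decode a predicted class index back into its category (the `decode` helper is illustrative, not part of the model's API):

```python
# Label-to-id mapping taken from the class list above.
label2id = {
    "AGG": 0,
    "VERBO_INTRANSITIVO": 1,
    "VERBO_TRANSITIVO": 2,
    "SOST_MASCHILE": 3,
    "SOST_FEMMINILE": 4,
    "AVVERBIO": 5,
    "AGG_SOSTANTIVATO": 6,
}

# Invert it to decode model output indices into labels.
id2label = {i: label for label, i in label2id.items()}

def decode(pred_id: int) -> str:
    """Map a predicted class index back to its lemma category."""
    return id2label[pred_id]

print(decode(2))  # VERBO_TRANSITIVO
```

In practice the same mapping is typically available from the loaded model as `model.config.id2label`.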
The project is still a work in progress; new versions will follow over time.
Training
- epochs: 5
- lr: 1e-4
- optim: AdamW
- weight_decay: 1e-2
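The reported hyperparameters can be collected into a config dict whose keys follow the parameter names of `transformers.TrainingArguments` (`optim="adamw_torch"` is the assumed spelling of the AdamW optimizer; everything beyond these four values is an assumption):

```python
# Training hyperparameters as reported above, keyed by the
# corresponding transformers.TrainingArguments parameter names.
training_config = {
    "num_train_epochs": 5,       # epochs: 5
    "learning_rate": 1e-4,       # lr: 1e-4
    "optim": "adamw_torch",      # optim: AdamW (assumed variant)
    "weight_decay": 1e-2,        # weight_decay: 1e-2
}

print(training_config)
```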
Eval