roberta-base-pcm is a fine-tuned RoBERTa base model. It has been trained to recognize four types of entities: dates and times (DATE), locations (LOC), organizations (ORG), and persons (PER).

How to use arnolfokam/roberta-base-pcm with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="arnolfokam/roberta-base-pcm")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm")
```
This model was fine-tuned on the Nigerian Pidgin corpus (pcm) of the MasakhaNER dataset. However, we thresholded the dataset, keeping only sentences with at most 10 entity groups.
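The preprocessing code is not included in this card; a minimal sketch of such thresholding, assuming BIO-tagged sentences and a hypothetical `MAX_GROUPS` constant (the data below is illustrative, not the actual MasakhaNER corpus), might look like:

```python
# Sketch: keep only sentences whose number of entity groups
# does not exceed a threshold (each B- tag starts one group).
MAX_GROUPS = 10

def count_entity_groups(tags):
    """Count entity groups in a BIO tag sequence."""
    return sum(1 for tag in tags if tag.startswith("B-"))

def threshold_sentences(sentences, max_groups=MAX_GROUPS):
    """Drop sentences with more than max_groups entity groups."""
    return [s for s in sentences
            if count_entity_groups(s["ner_tags"]) <= max_groups]

# Illustrative example sentence in MasakhaNER-style format
sentences = [
    {"tokens": ["Buhari", "don", "reach", "Lagos"],
     "ner_tags": ["B-PER", "O", "O", "B-LOC"]},
]
print(len(threshold_sentences(sentences)))  # 1
```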
This model was trained on a single NVIDIA P5000 from Paperspace.
We evaluated this model on the test split of the Nigerian Pidgin corpus (pcm) present in the MasakhaNER dataset, with no thresholding.
| Model Name | Precision | Recall | F1-score |
|---|---|---|---|
| roberta-base-pcm | 88.55 | 82.45 | 85.39 |
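As a quick sanity check, the reported F1-score is consistent with the standard harmonic mean of precision and recall:

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 88.55, 82.45
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 85.39
```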
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/roberta-base-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/roberta-base-pcm")

# Build an NER pipeline and run it on a Nigerian Pidgin sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."

ner_results = nlp(example)
print(ner_results)
```
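The pipeline above returns one prediction per (sub)token. To get whole entity spans instead, you can pass an `aggregation_strategy` argument to the pipeline; alternatively, here is a small sketch of merging BIO-tagged token predictions into spans. The `preds` list is illustrative, not actual output of roberta-base-pcm:

```python
# Sketch: merge per-token BIO predictions into entity spans.
def merge_entities(preds):
    """Group consecutive B-/I- tokens of the same type into spans."""
    spans = []
    for p in preds:
        tag, word = p["entity"], p["word"]
        if tag.startswith("B-") or not spans or spans[-1]["type"] != tag[2:]:
            spans.append({"type": tag[2:], "text": word})
        else:  # I- tag continuing the previous span
            spans[-1]["text"] += " " + word
    return spans

# Illustrative per-token predictions, not real model output
preds = [
    {"entity": "B-ORG", "word": "Ultimate"},
    {"entity": "I-ORG", "word": "Fighting"},
    {"entity": "I-ORG", "word": "Championship"},
    {"entity": "B-LOC", "word": "Jacksonville"},
]
print(merge_entities(preds))
# [{'type': 'ORG', 'text': 'Ultimate Fighting Championship'},
#  {'type': 'LOC', 'text': 'Jacksonville'}]
```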