How to use LACAI/roberta-base-PFG-progression with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="LACAI/roberta-base-PFG-progression")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LACAI/roberta-base-PFG-progression")
model = AutoModelForSequenceClassification.from_pretrained("LACAI/roberta-base-PFG-progression")
```
Base model: roberta-base
Fine-tuned as a progression model (to predict the acceptability of a dialogue) on the Persuasion For Good dataset (Wang et al., 2019):
Given a complete dialogue from (or in the style of) Persuasion For Good, the task is to predict a numeric score, typically in the range (-3, 3), where a higher score indicates a more acceptable dialogue in the context of the donation-solicitation task.
Example input:

```
How are you?</s>Good! how about yourself?</s>Great. Would you like to donate today to help the children?</s>
```
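The example above suggests that utterances are concatenated with RoBERTa's `</s>` separator, including a trailing separator. A minimal sketch of scoring a dialogue end to end, assuming that input format (the `format_dialogue` helper is illustrative, not part of the released code):

```python
"""Sketch: score a dialogue with LACAI/roberta-base-PFG-progression.

The "</s>"-joined input format is inferred from the example input
shown above; it is an assumption, not documented API.
"""

def format_dialogue(utterances, sep="</s>"):
    # Join utterances with the separator token, keeping the trailing
    # separator as in the example input.
    return sep.join(utterances) + sep

dialogue = format_dialogue([
    "How are you?",
    "Good! how about yourself?",
    "Great. Would you like to donate today to help the children?",
])

if __name__ == "__main__":
    # Heavy step: downloads the model weights on first run.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("LACAI/roberta-base-PFG-progression")
    model = AutoModelForSequenceClassification.from_pretrained(
        "LACAI/roberta-base-PFG-progression"
    )

    inputs = tokenizer(dialogue, return_tensors="pt")
    with torch.no_grad():
        # Single regression logit; higher means a more acceptable dialogue.
        score = model(**inputs).logits.item()
    print(f"progression score: {score:.3f}")
```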
For more context and usage information see https://github.rpi.edu/LACAI/dialogue-progression.