Sentence Similarity
sentence-transformers
Safetensors
bert
feature-extraction
Generated from Trainer
dataset_size:53851
loss:MultipleNegativesRankingLoss
text-embeddings-inference
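The tags above note that the model was trained with `MultipleNegativesRankingLoss`. As an illustration only (not the library's implementation), this objective treats each (anchor, positive) pair in a batch as a classification problem where every other positive serves as an in-batch negative; a minimal NumPy sketch:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Illustrative in-batch-negatives ranking loss on raw embedding matrices."""
    # Cosine similarity between every anchor and every positive in the batch.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)
    # Cross-entropy with the matching positive (the diagonal) as the label:
    # all other positives in the batch act as negatives.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy embeddings: matched pairs should score a lower loss than shuffled pairs.
anchors = np.eye(3)
loss_matched = mnr_loss(anchors, anchors)
loss_shuffled = mnr_loss(anchors, anchors[[1, 2, 0]])
print(loss_matched < loss_shuffled)  # True
```

The `scale` factor of 20 mirrors the common default temperature for this loss family, but the vectors and values here are toy data, not real model output.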
Instructions to use danthepol/MNLP_M3_document_encoder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use danthepol/MNLP_M3_document_encoder with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

# Load the model from the Hub
model = SentenceTransformer("danthepol/MNLP_M3_document_encoder")

# Run inference
sentences = [
    "A certain junior class has 1000 students and a certain senior class has 900 students. Among these students, there are 60 siblings pairs each consisting of 1 junior and 1 senior. If 1 student is to be selected at random from each class, what is the probability that the 2 students selected will be a sibling pair?",
    "Let's see Pick 60/1000 first Then we can only pick 1 other pair from the 800 So total will be 60 / 900 *1000 Simplify and you get 2/30000",
    "To maximize number of hot dogs with 300$ Total number of hot dogs bought in 250-pack = 22.95*13 =298.35$ Amount remaining = 300 - 298.35 = 1.65$ This amount is too less to buy any 8- pack . Greatest number of hot dogs one can buy with 300 $ = 250*13 = 3250",
    "artificial leg",
]
embeddings = model.encode(sentences)

# Compute pairwise similarity scores
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Notebooks
- Google Colab
- Kaggle
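The `model.similarity` call in the snippet above defaults to cosine similarity between embedding rows. A minimal sketch of that computation with toy vectors (NumPy only, no model download; the 4-dimensional embeddings are made up for illustration):

```python
import numpy as np

def cosine_similarity_matrix(embeddings):
    # Normalize each row to unit length, then take pairwise dot products.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Hypothetical 3-sentence batch of 4-dim embeddings (not real model output).
emb = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.8, 0.6, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
sims = cosine_similarity_matrix(emb)
print(sims.shape)       # (3, 3)
print(round(sims[0, 1], 2))  # 0.8
```

Each diagonal entry is 1.0 (every sentence is identical to itself), and off-diagonal entries range over [-1, 1], with higher values indicating more similar sentences.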