Instructions to use MTSAIR/multi_verse_model with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MTSAIR/multi_verse_model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MTSAIR/multi_verse_model")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MTSAIR/multi_verse_model")
model = AutoModelForCausalLM.from_pretrained("MTSAIR/multi_verse_model")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MTSAIR/multi_verse_model with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MTSAIR/multi_verse_model"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MTSAIR/multi_verse_model",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/MTSAIR/multi_verse_model
```
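The vLLM server above (like the SGLang server below) exposes an OpenAI-compatible chat completions API, so the curl request can also be built programmatically. A minimal Python sketch, mirroring the port and payload of the curl example; the actual network call is left commented out since it requires `vllm serve` to be running:

```python
import json
import urllib.request

# Same request body as the curl example above (OpenAI-compatible API).
payload = {
    "model": "MTSAIR/multi_verse_model",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

# Build the POST request against the local vLLM endpoint.
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

The same payload works against the SGLang server below; only the port changes (30000 instead of 8000).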
- SGLang
How to use MTSAIR/multi_verse_model with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "MTSAIR/multi_verse_model" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MTSAIR/multi_verse_model",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "MTSAIR/multi_verse_model" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "MTSAIR/multi_verse_model",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use MTSAIR/multi_verse_model with Docker Model Runner:
```shell
docker model run hf.co/MTSAIR/multi_verse_model
```
Curious about the meaning behind this...
> fine-tuned my "knowledge-absorption" process
Is this through new modeling, dataset structure (prompting), and/or the actual step-by-step process by which you achieve the final results? For example, is this model actually a combination of multiple fine-tunes on a custom dataset to enhance the knowledge-absorption process?
I ask because it reminds me of something I am trying to work out with my dataset and training using an 'Internal Knowledge Map' (https://huggingface.co/datasets/Severian/Internal-Knowledge-Map). Are we approaching similar ideas from way different perspectives? Or am I just filling in the blanks and way off course here? lol
Regardless, awesome work! Excited to see the process behind your approach if you feel like sharing
Hello @Severian, thanks for your interest. Unfortunately, I am not able to reveal information about the training process before we get acceptance for the paper, but I can say that our work on this model is not related to dataset structure. I would love to get more info about your ideas; they seem interesting to me.
Thanks for the response @ammarali32 ! I absolutely understand not being able to share, so no worries at all. I hope the paper gets accepted and you (and your team, if you have one) get due credit for great work! After lots of testing, your model's outputs are truly interesting and diverse. I'd be happy to talk about my ideas and share any insights/data anytime. You can connect with me on LinkedIn or Discord.
I've been so curious about this, and after poking around I might have an idea of what the whole new 'multi-verse' approach could be. Once again, I could totally be wrong, so apologies. I also want to be careful not to give away any potential clues to your secret sauce, but if CosPlace has something to do with it, I think you just went next level, and I can see why your models are performing so well. Amazing work!
(Feel free to hide this if it does give anything away accidentally)