How to use ZeppelinCorp/Charm_15 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="ZeppelinCorp/Charm_15")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("ZeppelinCorp/Charm_15")
model = AutoModelForCausalLM.from_pretrained("ZeppelinCorp/Charm_15")

How to use ZeppelinCorp/Charm_15 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "ZeppelinCorp/Charm_15"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ZeppelinCorp/Charm_15",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
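The same completion request can be sent from Python. A minimal sketch mirroring the curl call above; `requests` is an assumed extra dependency, and the endpoint path and response shape follow the OpenAI-compatible completions API (the SGLang server below accepts the identical request on port 30000):

```python
import json

# Same request body as the curl example above.
payload = {
    "model": "ZeppelinCorp/Charm_15",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(base_url: str = "http://localhost:8000") -> str:
    """POST the payload to the server's OpenAI-compatible /v1/completions endpoint."""
    import requests  # third-party; pip install requests

    resp = requests.post(
        f"{base_url}/v1/completions",
        headers={"Content-Type": "application/json"},
        data=json.dumps(payload),
        timeout=120,
    )
    resp.raise_for_status()
    # OpenAI-compatible servers return the generated text in choices[0].text.
    return resp.json()["choices"][0]["text"]

# complete()  # requires the vLLM server from the step above to be running
```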
How to use ZeppelinCorp/Charm_15 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ZeppelinCorp/Charm_15" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ZeppelinCorp/Charm_15",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server via Docker instead of pip:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ZeppelinCorp/Charm_15" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ZeppelinCorp/Charm_15",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
How to use ZeppelinCorp/Charm_15 with Docker Model Runner:
docker model run hf.co/ZeppelinCorp/Charm_15
Charm 15 is a model built for deep reasoning, reinforcement learning, and multimodal tasks, with an emphasis on safe and scalable deployment.
# Clone the repository and install dependencies:
git clone https://huggingface.co/charm15/charm15.git
cd charm15
pip install torch transformers onnxruntime tensorflow
# Use the causal-LM class so that .generate() is available (plain AutoModel
# loads the bare encoder without a generation head):
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("charm15/charm15_base_model")
tokenizer = AutoTokenizer.from_pretrained("charm15/charm15_base_model")

input_text = "Explain the Pythagorean theorem."
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
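In practice the quickstart above is paired with explicit generation settings, since the transformers default length cap is very small. A hedged sketch: the model id is taken from the snippet above, the kwargs are standard transformers `generate()` parameters, and the values are illustrative choices matching the server examples:

```python
# Illustrative generation settings (standard transformers generate() kwargs).
GEN_KWARGS = {
    "max_new_tokens": 256,  # the default length cap is tiny; raise it explicitly
    "do_sample": True,      # sample instead of greedy decoding
    "temperature": 0.5,     # matches the server examples above
}

def generate(prompt: str, model_id: str = "charm15/charm15_base_model") -> str:
    # Imported lazily: loading pulls multi-GB weights, so keep it out of module import.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, **GEN_KWARGS)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# generate("Explain the Pythagorean theorem.")  # requires downloading the weights
```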
Charm 15 includes bias mitigation and safety filters to ensure responsible AI usage. Read the ethics guidelines for more details.
Licensed under Apache 2.0.
Base model
mattshumer/mistral-8x7b-chat