Instructions for using ModalityDance/latent-tts-colar with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers

How to use ModalityDance/latent-tts-colar with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ModalityDance/latent-tts-colar")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, ColarLlama

tokenizer = AutoTokenizer.from_pretrained("ModalityDance/latent-tts-colar")
model = ColarLlama.from_pretrained("ModalityDance/latent-tts-colar")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ModalityDance/latent-tts-colar with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ModalityDance/latent-tts-colar"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ModalityDance/latent-tts-colar",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/ModalityDance/latent-tts-colar
```
- SGLang
How to use ModalityDance/latent-tts-colar with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ModalityDance/latent-tts-colar" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ModalityDance/latent-tts-colar",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ModalityDance/latent-tts-colar" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ModalityDance/latent-tts-colar",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use ModalityDance/latent-tts-colar with Docker Model Runner:
```shell
docker model run hf.co/ModalityDance/latent-tts-colar
```
CoLaR Model
Overview
CoLaR (Compressed Latent Reasoning) is a latent reasoning model based on LLaMA that uses a specialized LatentHead module for generating continuous latent representations. This model is part of the Parallel Test-Time Scaling for Latent Reasoning Models framework.
- Paper: Parallel Test-Time Scaling for Latent Reasoning Models
- Code: https://github.com/ModalityDance/LatentTTS
Model Details
- Base Architecture: LLaMA Language Model
- Model Class: `ColarLlama` (extends `LlamaForCausalLM`)
- Special Features: `LatentHead` module for latent-space generation
- Latent Tokens: uses the special token `<|latent|>` for latent reasoning
- End Token: uses `###` as the end-of-latent marker
- Input Format: direct input format with latent tokens
Related Models
This project provides other latent reasoning models that you may find useful; see the LatentTTS code repository linked above.
Installation
Download the model from HuggingFace:
```shell
huggingface-cli download ModalityDance/latent-tts-colar --local-dir checkpoints/colar
```
Quick Start
Basic Usage
```python
import torch
from transformers import AutoTokenizer
from src.generation_mixin import LatentGenerationMixin, LatentGenerationConfig
from src.paths import MODELS

# Load tokenizer
model_id = "checkpoints/colar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Get latent token IDs
latent_id = tokenizer.convert_tokens_to_ids("<|latent|>")
end_id = tokenizer.convert_tokens_to_ids("###")

# Create model class with generation mixin
class LatentCoLaR(MODELS["colar"]["class"], LatentGenerationMixin):
    pass

# Load model
model = LatentCoLaR.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,  # Recommended for LLaMA models
)

# Prepare input
question = "What is 2 + 2?<|latent|>"
inputs = tokenizer(question, return_tensors="pt").to(model.device)

# Configure generation
generation_config = LatentGenerationConfig(
    max_new_tokens=128,
    max_latent_length=64,  # CoLaR uses max_latent_length instead of latent_length
    latent_do_sample=True,
    latent_do_sample_by="dropout",  # or "noise"
    dropout_p=0.1,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)

# Generate
output = model.generate(
    **inputs,
    generation_config=generation_config,
    num_return_sequences=1,
)

# Decode result
result = tokenizer.decode(output[0], skip_special_tokens=True)
print(result)
```
Batch Processing
The model fully supports batch processing with Transformers:
```python
import torch

# Prepare batch inputs
questions = [
    "What is 2 + 2?<|latent|>",
    "What is 5 * 3?<|latent|>",
    "What is 10 - 4?<|latent|>",
]
inputs = tokenizer(questions, return_tensors="pt", padding=True).to(model.device)

# Generate for batch
outputs = model.generate(
    **inputs,
    generation_config=generation_config,
    num_return_sequences=1,
)

# Decode batch results
results = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for result in results:
    print(result)
```
Model Architecture
LatentHead Module
CoLaR uses a specialized `LatentHead` for generating latent representations:

```python
class LatentHead(nn.Module):
    def __init__(self, feature_size, intermediate_size=512):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feature_size, intermediate_size),
            nn.GELU(),
            nn.Linear(intermediate_size, intermediate_size),
            nn.LayerNorm(intermediate_size),
        )
        self.mean = nn.Linear(intermediate_size, feature_size)
```
The latent embeddings are scaled by `latent_embedding_std` (default: 0.018 for LLaMA-3.2 models).
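The excerpt above shows only the module's layers. As a minimal runnable sketch, the head might map hidden states to scaled latent embeddings like this; the `forward` method and the small sizes are illustrative assumptions, not the repository's exact implementation:

```python
import torch
from torch import nn

class LatentHead(nn.Module):
    """Mirrors the structure shown above; forward() is an assumed completion."""
    def __init__(self, feature_size, intermediate_size=512):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feature_size, intermediate_size),
            nn.GELU(),
            nn.Linear(intermediate_size, intermediate_size),
            nn.LayerNorm(intermediate_size),
        )
        self.mean = nn.Linear(intermediate_size, feature_size)

    def forward(self, hidden_states):
        # Predict the mean of the next latent embedding
        return self.mean(self.fc(hidden_states))

head = LatentHead(feature_size=64, intermediate_size=32)
hidden = torch.randn(2, 5, 64)   # (batch, seq, feature)
latent = head(hidden) * 0.018    # scale by latent_embedding_std
print(latent.shape)              # torch.Size([2, 5, 64])
```

The latent lives in the same space as the token embeddings (`feature_size` in, `feature_size` out), which is what lets it be fed back as the next input embedding.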
Generation Parameters
LatentGenerationConfig
- `max_new_tokens` (int): maximum number of tokens to generate
- `max_latent_length` (int): maximum number of latent tokens (default: 64)
- `latent_do_sample` (bool): whether to use stochastic sampling
- `latent_do_sample_by` (str): sampling method, `"dropout"` or `"noise"`
- `dropout_p` (float): dropout probability for Monte Carlo Dropout (e.g., 0.1)
- `noise_std` (float): standard deviation for Additive Gaussian Noise
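For reference, the documented fields can be sketched as a standalone dataclass. This is an illustration of the interface only; the real `LatentGenerationConfig` lives in `src.generation_mixin` and also accepts the usual Transformers fields such as `pad_token_id` and `eos_token_id`:

```python
from dataclasses import dataclass

@dataclass
class LatentGenerationConfigSketch:
    """Toy mirror of the fields listed above; defaults taken from the card's examples."""
    max_new_tokens: int = 128
    max_latent_length: int = 64
    latent_do_sample: bool = True
    latent_do_sample_by: str = "dropout"  # or "noise"
    dropout_p: float = 0.1
    noise_std: float = 0.1

cfg = LatentGenerationConfigSketch(latent_do_sample_by="noise", noise_std=0.05)
print(cfg.max_latent_length)  # → 64
```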
Sampling Methods
Monte Carlo Dropout: randomly drops activations during forward passes

```python
generation_config = LatentGenerationConfig(
    latent_do_sample_by="dropout",
    dropout_p=0.1,
    # ...
)
```

Additive Gaussian Noise: injects noise into latent embeddings

```python
generation_config = LatentGenerationConfig(
    latent_do_sample_by="noise",
    noise_std=0.1,
    # ...
)
```
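To see why either method yields diverse latents, here is a toy illustration in plain PyTorch (not the repository's code): dropout must be left in train mode at inference time, while Gaussian noise is added to the latent directly.

```python
import torch

torch.manual_seed(0)
latent = torch.randn(1, 64)  # a stand-in latent embedding

# Monte Carlo Dropout: keep dropout active at inference time
dropout = torch.nn.Dropout(p=0.1)
dropout.train()  # in eval mode, Dropout is a no-op
mc_samples = [dropout(latent) for _ in range(4)]

# Additive Gaussian Noise: perturb the latent directly
noise_std = 0.1
noise_samples = [latent + noise_std * torch.randn_like(latent) for _ in range(4)]

# Each pass produces a distinct latent, so repeated generation
# explores different reasoning paths in latent space.
assert not torch.equal(mc_samples[0], mc_samples[1])
assert not torch.equal(noise_samples[0], noise_samples[1])
```

This stochasticity is what makes parallel test-time scaling possible: sampling several latent trajectories for the same question and aggregating their answers.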
Answer Extraction
CoLaR uses a special answer format with an "Answer:" prefix:

```python
from src.paths import colar_extract_answer_number

# Extract answer from generated text
answer = colar_extract_answer_number(result)
print(f"Answer: {answer}")
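If you are not running inside the repository, a hypothetical stand-in for `colar_extract_answer_number` might look like the following; the exact parsing rules in `src.paths` may differ:

```python
import re

def extract_answer_number(text):
    """Hypothetical re-implementation: take the number after the last 'Answer:'."""
    matches = re.findall(r"Answer:\s*(-?\d+(?:\.\d+)?)", text)
    return float(matches[-1]) if matches else None

print(extract_answer_number("2 + 2 equals four. Answer: 4"))  # → 4.0
print(extract_answer_number("no final answer here"))          # → None
```

Returning `None` on a miss lets evaluation code count unparseable generations separately from wrong answers.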
Evaluation
Run evaluation using the provided scripts:
```shell
# For CoLaR (LLaMA-based models)
./run_tests_llama.sh
```
Model Card
- Paper: Parallel Test-Time Scaling for Latent Reasoning Models
- HuggingFace: ModalityDance/latent-tts-colar
- Benchmarks: GSM8K Test, GSM8K Hard, MultiArith
Notes
- Data Type: `torch.bfloat16` or `torch.float16` is recommended for LLaMA models
- Memory: LLaMA models typically require more GPU memory than GPT-2 models
- Latent Length: CoLaR uses `max_latent_length` instead of a fixed `latent_length`
Citation
If you use this model, please cite:
```bibtex
@misc{you2025paralleltesttimescalinglatent,
  title={Parallel Test-Time Scaling for Latent Reasoning Models},
  author={Runyang You and Yongqi Li and Meng Liu and Wenjie Wang and Liqiang Nie and Wenjie Li},
  year={2025},
  eprint={2510.07745},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2510.07745},
}

@misc{tan2025thinksilentlythinkfast,
  title={Think Silently, Think Fast: Dynamic Latent Compression of LLM Reasoning Chains},
  author={Wenhui Tan and Jiaze Li and Jianzhong Ju and Zhenbo Luo and Jian Luan and Ruihua Song},
  year={2025},
  eprint={2505.16552},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.16552},
}
```
Model tree for ModalityDance/latent-tts-colar
- Base model: meta-llama/Llama-3.2-1B-Instruct