How to use array/Qwen2.5-VL-SAT with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="array/Qwen2.5-VL-SAT", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText
processor = AutoProcessor.from_pretrained("array/Qwen2.5-VL-SAT", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("array/Qwen2.5-VL-SAT", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use array/Qwen2.5-VL-SAT with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "array/Qwen2.5-VL-SAT"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "array/Qwen2.5-VL-SAT",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
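Because the vLLM server is OpenAI-compatible, you can also call it from Python with the official openai client instead of curl; a minimal sketch (the API key value is a placeholder, since vLLM ignores it by default):

# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="array/Qwen2.5-VL-SAT",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)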
How to use array/Qwen2.5-VL-SAT with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "array/Qwen2.5-VL-SAT" \
  --host 0.0.0.0 \
  --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "array/Qwen2.5-VL-SAT",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
# Alternatively, start the SGLang server with Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "array/Qwen2.5-VL-SAT" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl as shown above.

How to use array/Qwen2.5-VL-SAT with Docker Model Runner:
docker model run hf.co/array/Qwen2.5-VL-SAT
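Docker Model Runner also serves an OpenAI-compatible API, so the openai Python client works here too; a minimal sketch, assuming host TCP access is enabled on Docker Model Runner's default port 12434 (both the enable step and the port are assumptions to verify against your Docker setup):

# pip install openai
from openai import OpenAI

# Assumption: TCP host access enabled, e.g. `docker desktop enable model-runner --tcp 12434`.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="hf.co/array/Qwen2.5-VL-SAT",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)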
A strong spatial Qwen2.5-VL baseline.
Post-trained on SAT plus the answers-only data from Video-R1 (no reasoning traces). The exact mix is 60% SAT, 40% Video-R1.
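For illustration, a similar 60/40 blend can be drawn with datasets.interleave_datasets; a minimal sketch, not the authors' recipe, and the dataset ids and split names below are assumptions:

# pip install datasets
from datasets import load_dataset, interleave_datasets

# Hypothetical ids/splits; substitute whatever copies of SAT and Video-R1 you actually have.
sat = load_dataset("array/sat", split="train")
video_r1 = load_dataset("Video-R1/Video-R1-data", split="train")

# Draw each example from SAT with p=0.6 and Video-R1 with p=0.4, matching the stated mix.
# Note: interleave_datasets requires both datasets to share the same features/columns.
mix = interleave_datasets([sat, video_r1], probabilities=[0.6, 0.4], seed=42)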
% pip install git+https://github.com/huggingface/transformers accelerate
% pip install qwen-vl-utils[decord]==0.0.8
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model = Qwen2_5_VLForConditionalGeneration.from_pretrained("array/Qwen2.5-VL-SAT")
processor = AutoProcessor.from_pretrained("array/Qwen2.5-VL-SAT")
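From here, generation follows the standard Qwen2.5-VL recipe using qwen_vl_utils.process_vision_info (installed above); a minimal sketch with a placeholder video path and question:

from qwen_vl_utils import process_vision_info

messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/video.mp4"},  # placeholder path
            {"type": "text", "text": "Where is the red chair relative to the table?"},  # placeholder question
        ],
    }
]

# Render the chat template and extract image/video inputs from the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])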
Please see the paper for details on the training and evaluation datasets and metrics. In the table below, MV, RelDep, SpRel, Jig, and IQT correspond to the BLINK subtasks Multi-view Reasoning, Relative Depth, Spatial Relation, Jigsaw, and IQ Test; VSI refers to VSI-Bench.
| Model | MV | RelDep | SpRel | Jig | IQT | BLINK Avg | BLINK Reas | SAT-R | VSI Avg | VSI Reas | ERQA | Avg (All) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-VL (7B) | 39.00 | 61.29 | 92.38 | 58.66 | 25.33 | 55.33 | 41.00 | 59.00 | 23.96 | 22.96 | 38.91 | 44.30 |
| + SAT | 57.14 | 87.09 | 74.12 | 58.66 | 30.00 | 61.40 | 48.60 | 71.66 | 32.40 | 30.65 | 38.00 | 50.87 |
@misc{ray2025satdynamicspatialaptitude,
title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
year={2025},
eprint={2412.07755},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.07755},
}
Base model: Qwen/Qwen2.5-VL-7B-Instruct