Experimental only. llama.cpp does not currently support DeepSeek Sparse Attention (DSA), so these quants fall back to the regular dense attention path from DeepSeek V3.1.
Should work with the main branch of llama.cpp.
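As a rough sketch, the quants can be run with the standard llama.cpp CLI built from the main branch. The GGUF filename below is a placeholder, not an actual file in this repo; substitute the quant you downloaded:

```shell
# Build llama.cpp from the main branch. Since DSA is not supported,
# the regular dense-attention path is used automatically.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release -j

# Run a short prompt against a downloaded GGUF file.
# NOTE: placeholder filename -- use the actual quant file you fetched.
./build/bin/llama-cli \
  -m DeepSeek-V3.2-Experimental.gguf \
  -p "Hello" \
  -n 64
```

Multi-part quants only need the first shard passed to `-m`; llama.cpp picks up the remaining shards automatically.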
Model tree for lovedheart/DeepSeek-V3.2-GGUF-Experimental
- Base model: deepseek-ai/DeepSeek-V3.2-Exp-Base
- Finetuned: deepseek-ai/DeepSeek-V3.2 (the source of these quants)