Qwen3.6-35B-A3B
This repository contains the Qwen3.6-35B-A3B model in GGUF format, quantized to Q4_K_S.
The files were quantized by Abiray with llama.cpp to make the model usable on consumer hardware and in CPU-heavy environments.
I have processed this model into several other quantization formats, which you can find in my other repositories.
You can run this model locally using llama-cli from the llama.cpp project.
```shell
# Example command (adjust threads and context size to your machine)
./llama-cli -m Qwen3.6-35B-A3B-Q4_K_S.gguf -p "Your prompt here" -n 512 -t 8 -c 4096
```
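Before downloading, it can help to estimate whether the file will fit in your RAM or VRAM. The sketch below assumes roughly 4.5 bits per weight for Q4_K_S; this is an approximation, and the actual GGUF file size varies with the exact tensor mix and metadata.

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Rough on-disk/in-memory size of a quantized GGUF model, in GB.

    Assumes a flat bits-per-weight average (an approximation for Q4_K_S);
    real files also carry metadata and mixed-precision tensors.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 35B-parameter model at ~4.5 bits/weight is on the order of 20 GB.
print(f"{gguf_size_gb(35):.1f} GB")
```

Note that for a mixture-of-experts model like this one, the file size follows the total parameter count (35B), not the active parameter count, since all experts are stored.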
Quantization: 4-bit (Q4_K_S)
Base model: Qwen/Qwen3.6-35B-A3B