Mixed Precision GGUF layer quantization of Qwen3-VL-30B-A3B-Instruct by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Instruct
The hybrid quant employs different quantization levels on a per-layer basis to enable both high performance and small file size at the same time. This particular quant was optimized for high performance across a set of test prompts at ~IQ4_XS size. The model shows occasional repetition failures on test prompts under greedy sampling, where it enters an infinite generation loop on a repeating pattern. This problem could not be solved by adjusting layer quants, so it appears to be baked into the model training; the 32B dense model (VL 32B) does not exhibit this failure mode.
The quants employed are all K-quants, to avoid the slow processing of IQ quants on CPU or older GPUs. For this file the custom quant levels are defined as:
```
Q4_K_L : attn_v = q6_k  attn_o = q6_k  ffn_d = q6_k
Q5_K_L : attn_v = q8_0  attn_o = q6_k  ffn_d = q6_k
Q6_K_S : Q6_K
```
```shell
LAYER_TYPES='[
[0 ,"Q6_K_S"],[1 ,"Q5_K_S"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
[40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
]'
FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
```
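The per-layer assignment above can be sanity-checked by parsing the LAYER_TYPES string, since it is valid JSON once the shell quotes are removed. A minimal sketch in plain Python (nothing llama.cpp-specific is assumed):

```python
import json
from collections import Counter

# The per-layer quant map from above (valid JSON once the shell quotes are removed).
LAYER_TYPES = """[
[0 ,"Q6_K_S"],[1 ,"Q5_K_S"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q3_K_L"],[29,"Q3_K_L"],[30,"Q3_K_L"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
[40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_M"],[43,"Q4_K_L"],[44,"Q5_K_S"],[45,"Q5_K_M"],[46,"Q5_K_L"],[47,"Q6_K_S"]
]"""

# Map layer index -> quant level, then summarize how many layers use each level.
layers = dict(json.loads(LAYER_TYPES))
assert sorted(layers) == list(range(48))  # all 48 layers covered exactly once
counts = Counter(layers.values())
print(counts)
```

This confirms the bulk of the layers sit at Q3_K_M/Q3_K_L and Q4_K_S, with higher levels reserved for the first and last layers.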
Comparison:
| Quant | Size (bytes) | PPL | Comment |
|---|---|---|---|
| IQ4_XS | 16.6e9 | 7.1 | IQ4_XS with default embedding and output |
| Q4_K_H | 16.9e9 | 7.1 | Hybrid quant with Q6_K embedding Q6_K output |
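For a rough sense of density: assuming a total parameter count of about 30.5e9 (inferred from the "30B" model name, not a figure stated here), the hybrid file works out to roughly 4.4 bits per weight:

```python
# Rough bits-per-weight for the hybrid quant (sketch).
# 30.5e9 params is an assumption based on the "30B" model name, not a stated figure.
size_bytes = 16.9e9   # Q4_K_H file size from the table above
n_params   = 30.5e9   # assumed total parameter count
bpw = size_bytes * 8 / n_params
print(f"{bpw:.2f} bits/weight")
```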
Usage:
Qwen3-VL-30B-A3B-Instruct is a vision-capable MoE model. Used together with its multimedia projector layers, it can process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
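A minimal vision invocation might look like the following sketch, based on the mtmd README; the image path and prompt are placeholders:

```shell
# Describe an image using the quantized model plus its projector (paths are examples).
llama-mtmd-cli -m Qwen3-VL-30B-A3B-Instruct.Q4_K_H.gguf \
    --mmproj Qwen3-VL-30B-A3B-Instruct.mmproj.gguf \
    --image ./test.jpg \
    -p "Describe this image."
```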
This MoE model can be run efficiently by offloading expert layers to CPU. Some example configurations for a 12 GB VRAM GPU:
```shell
# Offload all experts to CPU, maximize context size on GPU: 22 t/s gen rate on 9900K + 4070
OT="-ot exps=CPU -ngl 99"

# Offload only the experts of layers 30-47 to CPU for max inference speed with usable context size: 33 t/s gen rate on 9900K + 4070
OT="-ot blk\.(3[0-9]|4[0-7])\..*exps=CPU -ngl 99"

# Offload the experts of layers 25-47 to CPU for a bigger context size at still-high gen speed: 27 t/s gen rate on 9900K + 4070
OT="-ot blk\.(2[5-9]|3[0-9]|4[0-7])\..*exps=CPU -ngl 99"
```
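The layer-range patterns work by regex matching against tensor names, with the alternation grouped in parentheses so it stays scoped to the intended layers. A quick check of the matching behavior using Python's `re` module (a sketch; llama.cpp's `-ot` uses its own regex search over tensor names, but the pattern semantics shown here are standard):

```python
import re

# Grouped pattern for "experts of layers 30-47" (illustrative).
pat = re.compile(r"blk\.(3[0-9]|4[0-7])\..*exps")

names = [
    "blk.35.ffn_gate_exps.weight",   # layer 35 expert tensor -> matched (offloaded)
    "blk.12.ffn_gate_exps.weight",   # layer 12 expert tensor -> not matched
    "blk.35.attn_q.weight",          # layer 35 non-expert tensor -> not matched
]
for n in names:
    print(n, bool(pat.search(n)))
```

Without the parentheses, the bare alternation `blk\.3[0-9]|4[0-7].*exps` would split at the `|` and also match non-expert tensors of layers 30-39, offloading more than intended.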
The minimum llama.cpp version to run the Qwen3-VL series is build 6915, with build 6936 or later recommended.
Benchmarks:
A full set of vision benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Download the files below:
| Link | Type | Size/e9 B | Notes |
|---|---|---|---|
| Qwen3-VL-30B-A3B-Instruct.Q4_K_H.gguf | Q4_K_H | 16.9 | ~IQ4_XS size |
| Qwen3-VL-30B-A3B-Instruct.mmproj.gguf | F16 | 1.1 | multimedia projector |
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository: