Qwen3.6-35B-A3B-abliterated-MAX-F32-GGUF

Qwen3.6-35B-A3B-Abliterated-MAX is an abliterated variant built on top of Qwen/Qwen3.6-35B-A3B. It applies refusal-direction analysis and ablation-based training to reduce internal refusal behaviors while preserving the reasoning and instruction-following strengths of the original architecture. The result is a 35B-parameter Mixture-of-Experts language model tuned for detailed responses and improved instruction adherence.

Model Files

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3.6-35B-A3B-abliterated-MAX.BF16.gguf | BF16 | 69.4 GB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.F16.gguf | F16 | 69.4 GB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.F32.gguf | F32 | 139 GB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.Q8_0.gguf | Q8_0 | 36.9 GB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.mmproj-bf16.gguf | mmproj-bf16 | 1.66 kB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.mmproj-f16.gguf | mmproj-f16 | 1.66 kB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.mmproj-f32.gguf | mmproj-f32 | 1.66 kB | Download |
| Qwen3.6-35B-A3B-abliterated-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 1.66 kB | Download |

Quants Usage

(sorted by size, not necessarily by quality; IQ quants are often preferable to similarly sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Model size: 35B params
Architecture: qwen35moe


Repository: prithivMLmods/Qwen3.6-35B-A3B-abliterated-MAX-F32-GGUF