Huihui Qwen3.6-35B A3B Abliterated

This repository contains a standalone Q4_K_M GGUF quantization of the huihui-ai/Huihui-Qwen3.6-35B-A3B-abliterated model.

This specific repository is designed for users who want to quickly download the highly-recommended Q4_K_M format without cloning a massive directory containing multiple quant sizes.
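If you want to script the single-file download rather than clone anything, one option is Hugging Face's direct `resolve` URL pattern. A minimal sketch follows; the exact `.gguf` filename used below is a guess, so check the repository's file listing for the real name:

```python
def gguf_download_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build a direct-download URL using Hugging Face's resolve pattern."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# The filename here is hypothetical -- check the repo's Files tab for the real one.
url = gguf_download_url(
    "Abiray/Huihui-Qwen3.6-35B-A3B-abliterated-Q4_K_M-GGUF",
    "huihui-qwen3.6-35b-a3b-abliterated-q4_k_m.gguf",
)
print(url)
```

The resulting URL can be fetched with `curl -L -O`, or you can use `hf_hub_download` from the `huggingface_hub` package instead.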

📦 Looking for other sizes? If you need higher precision (like Q6_K or Q8_0) or smaller files for heavily constrained hardware (like Q3_K_M), you can find the complete list of quantizations in my main repository here: Abiray/Huihui-Qwen3.6-35B-A3B-abliterated-GGUF

Why Q4_K_M?

The Q4_K_M quantization is widely considered the "sweet spot" for running Large Language Models locally. It offers an excellent balance: response quality that is practically indistinguishable from the unquantized base model, while requiring significantly less RAM and compute.
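To put "significantly less RAM" in rough numbers: Q4_K_M averages roughly 4.85 bits per weight in llama.cpp, versus 16 bits for FP16 weights. A back-of-envelope sketch (the bits-per-weight figure is approximate, and real memory use adds KV cache and runtime overhead on top of the weights):

```python
def approx_weight_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough weight-storage size in decimal GB for a given quantization."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# ~4.85 bits/weight is an approximate average for Q4_K_M in llama.cpp.
q4 = approx_weight_size_gb(35, 4.85)    # ~21 GB
fp16 = approx_weight_size_gb(35, 16.0)  # 70 GB
print(f"Q4_K_M: ~{q4:.1f} GB vs FP16: {fp16:.0f} GB")
```

So for a 35B-parameter model the 4-bit quant needs roughly a third of the memory of the FP16 weights.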

Model Characteristics

  • Abliterated: This model has been stripped of standard safety alignments and refusals, making it an excellent, compliant engine for creative storytelling and unrestricted text generation.
Model Details

  • Format: GGUF (4-bit, Q4_K_M)
  • Model size: 35B params
  • Architecture: qwen35moe
  • Downloads last month: 311
