Huihui Qwen3.6-35B A3B Abliterated
This repository contains a standalone Q4_K_M GGUF quantization of the huihui-ai/Huihui-Qwen3.6-35B-A3B-abliterated model.
This specific repository is designed for users who want to quickly download the highly-recommended Q4_K_M format without cloning a massive directory containing multiple quant sizes.
📦 Looking for other sizes? If you need higher precision (like Q6_K or Q8_0) or smaller files for heavily constrained hardware (like Q3_K_M), you can find the complete list of quantizations in my main repository here: Abiray/Huihui-Qwen3.6-35B-A3B-abliterated-GGUF
Why Q4_K_M?
The Q4_K_M quantization is widely considered the optimal "sweet spot" for running Large Language Models locally. It offers an excellent balance: practically indistinguishable response quality compared to the massive unquantized base model, while requiring significantly less RAM and computing power.
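As a sketch of how you might fetch and run this quant with llama.cpp — the exact GGUF filename below is an assumption, so check the repository's file list before running:

```shell
# Download only the Q4_K_M file from this repo, skipping everything else
# (the --include pattern matches the assumed filename; adjust if it differs)
huggingface-cli download Abiray/Huihui-Qwen3.6-35B-A3B-abliterated-Q4_K_M-GGUF \
  --include "*Q4_K_M*.gguf" --local-dir .

# Start an interactive chat session with llama.cpp's llama-cli
llama-cli -m ./huihui-qwen3.6-35b-a3b-abliterated-q4_k_m.gguf -cnv
```

If you use LM Studio, Ollama, or another GGUF-compatible frontend instead, pointing it at the downloaded `.gguf` file achieves the same result.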
Model Characteristics
- Abliterated: This model has been stripped of standard safety alignments and refusals, making it an excellent, compliant engine for creative storytelling and unrestricted text generation.
Model tree for Abiray/Huihui-Qwen3.6-35B-A3B-abliterated-Q4_K_M-GGUF
Base model: Qwen/Qwen3.6-35B-A3B