Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive

Join the Discord for updates, roadmaps, projects, or just to chat.

Qwen3 4B 2507 Instruct uncensored by HauhauCS.

About

No changes to datasets or capabilities: the model remains fully functional, retaining 100% of what the original authors intended - just without the refusals.

These aim to be lossless uncensored models: refusals removed without degrading the base model's quality.

Aggressive vs Balanced

Aggressive applies stronger uncensoring than the Balanced variant. Use it when you need zero refusals; choose Balanced for a lighter touch.

Downloads

File                                                              Quant    Size
Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-FP16.gguf   FP16     7.5 GB
Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q8_0.gguf   Q8_0     4.0 GB
Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q6_K.gguf   Q6_K     3.1 GB
Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf Q4_K_M   2.4 GB

Specs

Recommended Settings

From the Qwen team:

Thinking mode (default):

  • temperature=0.6
  • top_p=0.95
  • top_k=20
  • min_p=0

Non-thinking mode:

  • Append /no_think to the end of your prompt to disable thinking, and use:
  • temperature=0.7
  • top_p=0.8
  • top_k=20
  • min_p=0
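The two presets above can be captured in a small helper. This is a minimal sketch; `sampling_params` and `disable_thinking` are illustrative names, not part of any library:

```python
def sampling_params(thinking: bool) -> dict:
    """Return the Qwen-recommended sampler settings for each mode."""
    if thinking:
        return {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "min_p": 0}
    return {"temperature": 0.7, "top_p": 0.8, "top_k": 20, "min_p": 0}


def disable_thinking(prompt: str) -> str:
    """Append the /no_think soft switch that turns off thinking mode."""
    return prompt.rstrip() + " /no_think"
```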

Important:

  • Use llama.cpp's --jinja flag for proper chat-template handling
  • Thinking mode produces <think>...</think> tags before responses
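Since thinking mode wraps its reasoning in <think>...</think> tags, a quick way to keep only the final answer is to strip those blocks. A minimal sketch; `strip_think` is a made-up helper name:

```python
import re


def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from model output."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)
```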

Usage

Works with llama.cpp, LM Studio, Jan, koboldcpp, Ollama, etc.

# llama.cpp example
./llama-cli -m Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
  -p "Hello" --jinja -c 8192
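
The --jinja flag makes llama.cpp apply the chat template stored in the GGUF metadata. If you prompt the model by hand instead, Qwen models use a ChatML-style template; here is a minimal sketch under that assumption (`chatml_prompt` is an illustrative helper, not part of any library):

```python
def chatml_prompt(user_msg: str,
                  system_msg: str = "You are a helpful assistant.") -> str:
    """Build a ChatML-style prompt as used by the Qwen family."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```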
Format: GGUF
Model size: 4B params
Architecture: qwen3
