Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive
Join the Discord for updates, roadmaps, projects, or just to chat.
Qwen3 4B 2507 Instruct uncensored by HauhauCS.
About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be the best lossless uncensored models out there.
Aggressive vs Balanced
Aggressive applies stronger uncensoring. Use this when you need no refusals.
Downloads
| File | Quant | Size |
|---|---|---|
| Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-FP16.gguf | FP16 | 7.5 GB |
| Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q8_0.gguf | Q8_0 | 4.0 GB |
| Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q6_K.gguf | Q6_K | 3.1 GB |
| Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf | Q4_K_M | 2.4 GB |
Specs
- 4B parameters (dense)
- 262K context
- Based on Qwen/Qwen3-4B-Instruct-2507
Recommended Settings
From the Qwen team:
Thinking mode (default):
temperature=0.6, top_p=0.95, top_k=20, min_p=0
Non-thinking mode:
- Add /no_think at the end of your prompt, or
- temperature=0.7, top_p=0.8, top_k=20, min_p=0
Important:
- Use the --jinja flag for proper chat template handling
- Thinking mode produces <think>...</think> tags before responses
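With llama.cpp, the recommended settings above map directly onto sampler flags. A minimal sketch, assuming the Q4_K_M file is in the current directory (verify flag names against your llama.cpp build):

```shell
# Thinking-mode sampler settings from the Qwen team, passed as llama.cpp flags
./llama-cli -m Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
  --jinja -c 8192 \
  --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0 \
  -p "Hello"
```

For non-thinking mode, swap in --temp 0.7 --top-p 0.8 and append /no_think to the prompt.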
Usage
Works with llama.cpp, LM Studio, Jan, koboldcpp, Ollama, etc.
```shell
# llama.cpp example
./llama-cli -m Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
  -p "Hello" --jinja -c 8192
```
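The same GGUF file can also be served over llama.cpp's OpenAI-compatible HTTP API. A hedged sketch, assuming llama-server from a recent llama.cpp build; the port and context size here are arbitrary choices, not requirements:

```shell
# Serve the model locally, then query the OpenAI-compatible endpoint
./llama-server -m Qwen3-4B-2507-Instruct-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf \
  --jinja -c 8192 --port 8080 &

# Non-thinking-mode sampler settings go in the request body
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}],
       "temperature": 0.7, "top_p": 0.8, "top_k": 20, "min_p": 0}'
```

Clients for LM Studio, Jan, and Ollama expose equivalent sampling controls in their own settings panels.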