GGUF

This is a duplicate of n-Arno/Anima-P3-Turbo-AIO-Q4_K, archived for the Space Luminia/Anima-2B-CPU.

This is an "All In One" version of Anima-Preview3 for the GPU poor:

  • Made specifically for Stable-diffusion.cpp; it will not work with ComfyUI/Forge (the text-encoder prefix is different)
  • It was manually concatenated (VAE + LLM + DiT) using torch and merged with the Turbo LoRA
  • It was then quantized using sd.cpp: sd-cli -M convert -m Anima-P3-Turbo-AIO.safetensors --type q4_K -o ANIMA/Anima-P3-Turbo-AIO-Q4_K.gguf
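The concatenation step above amounts to namespacing three state dicts under distinct key prefixes before saving a single file. A minimal sketch of the idea follows; the prefixes used here ("vae.", "text_encoders.", "model.") are illustrative assumptions, not the exact prefixes Stable-diffusion.cpp expects for Anima (the prefix mismatch is precisely why this file will not load in ComfyUI/Forge):

```python
# Illustrative sketch of the "All In One" concatenation step.
# The prefixes below are assumptions for illustration only; the real
# prefixes expected by Stable-diffusion.cpp for Anima may differ.
def merge_aio(vae_sd: dict, llm_sd: dict, dit_sd: dict) -> dict:
    """Combine three state dicts into one, namespaced by prefix."""
    merged = {}
    for prefix, sd in (("vae.", vae_sd),
                       ("text_encoders.", llm_sd),
                       ("model.", dit_sd)):
        for key, tensor in sd.items():
            merged[prefix + key] = tensor
    return merged

# With real checkpoints you would load each part with
# safetensors.torch.load_file and write the result with save_file.
```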

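For intuition on what the q4_K conversion does, here is a deliberately simplified blockwise 4-bit quantizer. It is NOT the actual q4_K layout (which uses 256-element super-blocks with per-sub-block scales and minimums); it only shows the scale-and-round principle behind block quantization:

```python
# Simplified blockwise 4-bit quantization sketch (not the real q4_K
# format): each block stores one float scale plus 4-bit integers.
def quantize_blocks(values, block_size=32):
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        # map the largest magnitude onto the 4-bit range [-8, 7]
        scale = max(abs(v) for v in block) / 7.0 or 1.0
        quants = [max(-8, min(7, round(v / scale))) for v in block]
        blocks.append((scale, quants))
    return blocks

def dequantize(blocks):
    out = []
    for scale, quants in blocks:
        out.extend(q * scale for q in quants)
    return out
```

The round-trip error per value is bounded by half the block scale, which is why larger quant types (q5_K, q8_0) trade size for fidelity.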
Parameters:

  • CFG scale: 1
  • sampler: er_sde
  • scheduler: smoothstep (~beta)
  • steps: 16
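The "smoothstep" scheduler name presumably refers to the classic smoothstep easing curve, 3t² − 2t³. How sd.cpp maps it onto noise levels is an assumption here; this sketch only illustrates the easing of normalized step positions:

```python
# Classic smoothstep curve; how sd.cpp applies it to the sigma
# schedule is assumed, not taken from its source.
def smoothstep(t: float) -> float:
    return t * t * (3.0 - 2.0 * t)

def eased_steps(n: int):
    """n normalized step positions, eased toward both endpoints."""
    return [smoothstep(i / (n - 1)) for i in range(n)]
```

The curve flattens near 0 and 1, so more of the 16 steps land near the start and end of denoising.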

Example usage:

sd-cli -m Anima-P3-Turbo-AIO-Q4_K.gguf --vae-tiling --fa --offload-to-cpu --steps 16 --cfg-scale 1 -W 832 -H 1216 --sampling-method er_sde --scheduler smoothstep --cache-mode spectrum --seed 666 -p "score_9, masterpiece, best quality, newest, 1girl, solo, cowboy shot, facing viewer, holding a board with 'AIO ANIMA TURBO' written on it"

And as a server (with a LoRA folder provided):

sd-server -m Anima-P3-Turbo-AIO-Q4_K.gguf --vae-tiling --offload-to-cpu --fa -l 0.0.0.0 --lora-model-dir ./lora --steps 16 --cfg-scale 1 -W 832 -H 1216 --sampling-method er_sde --scheduler smoothstep --cache-mode spectrum --seed 666

Model size: 3B params

Model tree for WeReCooking/Anima-P3-Turbo-AIO-Q4_K (one of 15 quantized versions)