One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models
Paper: arXiv 2505.21960
How to use senmaonk/loopfree-sd1.5 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "senmaonk/loopfree-sd1.5", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

This model implements the method described in the paper One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models.
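The pipeline call returns standard PIL images, so the result can be saved or post-processed with the usual Pillow API. A minimal sketch (using a blank placeholder image in place of a real generation, so it runs without downloading the model weights):

```python
from PIL import Image

# Placeholder standing in for `pipe(prompt).images[0]`, which is a PIL Image.
# SD 1.5-based models output 512x512 by default.
image = Image.new("RGB", (512, 512))

# Save the generated image to disk.
image.save("astronaut.png")
print(image.size)
```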
GitHub repository: https://github.com/sen-mao/Loopfree