Instructions to use tedlasai/learn2refocus with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use tedlasai/learn2refocus with Diffusers:
```
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the pipeline in bfloat16 and place it on the GPU.
# Apple devices should use "mps" instead; see the variant sketch
# after the Notebooks links below.
pipe = DiffusionPipeline.from_pretrained(
    "tedlasai/learn2refocus", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Generate video frames conditioned on the image and prompt, then save them.
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Notebooks
  - Google Colab
  - Kaggle
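The snippet above notes that Apple devices should switch to "mps". Below is a minimal sketch of that variant, plus a commented-out low-VRAM option. The choice of `torch.float16` on mps and the use of `enable_model_cpu_offload()` (a standard Diffusers/accelerate call) are assumptions about this pipeline, not something the model card confirms.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Apple Silicon: load without device_map and move the pipeline to "mps".
# float16 is used here because bfloat16 support on mps varies by
# PyTorch/macOS version (assumption; verify on your setup).
pipe = DiffusionPipeline.from_pretrained("tedlasai/learn2refocus", dtype=torch.float16)
pipe.to("mps")

# CUDA GPUs with limited VRAM: instead of placing the whole pipeline on the
# device, offload idle submodules to the CPU (requires accelerate, installed above).
# pipe.enable_model_cpu_offload()

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt="A man with short gray hair plays a red electric guitar.").frames[0]
export_to_video(output, "output.mp4")
```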
Add model card metadata and links for Learning to Refocus with Video Diffusion Models (#2)
opened by nielsr (HF Staff)
This PR improves the model card for "Learning to Refocus with Video Diffusion Models" by adding crucial metadata and enhancing its documentation.
Key changes include:
- Adding `pipeline_tag: image-to-video` to correctly categorize the model on the Hub, reflecting its capability to generate video sequences from images.
- Adding `library_name: diffusers`, as evidenced by `_diffusers_version` in the `config.json` files, to enable automated usage snippets (a sketch of the resulting front matter follows this list).
- Including direct links to the project page and the GitHub repository for easy access to further information and code.
- Providing a concise description of the model's purpose, based on the paper's abstract.
- Adding a citation section with the BibTeX entry from the project's GitHub README.
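For readers unfamiliar with model card metadata, the two fields described above live in the card's YAML front matter. A minimal sketch follows; only these two fields are confirmed by this PR's description, and the project/GitHub links it mentions belong in the card body, so they are not shown here.

```yaml
---
# Fields added by this PR, per the description above.
pipeline_tag: image-to-video
library_name: diffusers
---
```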
These updates will make the model more discoverable and user-friendly on the Hugging Face Hub.
tedlasai changed pull request status to merged
thank you!