<p align="center">
  <br>
  <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400">
  <br>
</p>

<h1 id="-diffusers">🧨 Diffusers</h1>

<p>🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you are looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. The library is designed with a focus on <a href="conceptual/philosophy#usability-over-performance">usability over performance</a>, <a href="conceptual/philosophy#simple-over-easy">simple over easy</a>, and <a href="conceptual/philosophy#tweakable-contributorfriendly-over-abstraction">customizability over abstraction</a>.</p>

<p>The library has three main components:</p>

<ul>
  <li>State-of-the-art <a href="api/pipelines/overview">diffusion pipelines</a> that can run inference with just a few lines of code.</li>
  <li>Interchangeable <a href="api/schedulers/overview">noise schedulers</a> for balancing the trade-off between generation speed and quality.</li>
  <li>Pretrained <a href="api/models">models</a> that can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems.</li>
</ul>

<ul>
  <li><a href="./tutorials/tutorial_overview"><strong>Tutorials</strong></a>: Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you are using 🤗 Diffusers for the first time!</li>
  <li><a href="./using-diffusers/loading_overview"><strong>How-to guides</strong></a>: Practical guides for loading pipelines, models, and schedulers. You will also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and apply different training techniques.</li>
  <li><a href="./conceptual/philosophy"><strong>Conceptual guides</strong></a>: Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library.</li>
  <li><a href="./api/models"><strong>Reference</strong></a>: Technical descriptions of how 🤗 Diffusers classes and methods work.</li>
</ul>

<h2 id="-diffusers-pipelines">🧨 Diffusers pipelines</h2>

<p>The table below summarizes all currently officially supported pipelines and their corresponding papers.</p>

<table>
  <thead>
    <tr><th>Pipeline</th><th>Paper/Repository</th><th>Tasks</th></tr>
  </thead>
  <tbody>
    <tr><td><a href="./api/pipelines/alt_diffusion">alt_diffusion</a></td><td><a href="https://huggingface.co/papers/2211.06679" rel="nofollow">AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities</a></td><td>Image-to-Image Text-Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/audio_diffusion">audio_diffusion</a></td><td><a href="https://github.com/teticio/audio-diffusion.git" rel="nofollow">Audio Diffusion</a></td><td>Unconditional Audio Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/controlnet">controlnet</a></td><td><a href="https://huggingface.co/papers/2302.05543" rel="nofollow">Adding Conditional Control to Text-to-Image Diffusion Models</a></td><td>Image-to-Image Text-Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/cycle_diffusion">cycle_diffusion</a></td><td><a href="https://huggingface.co/papers/2210.05559" rel="nofollow">Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance</a></td><td>Image-to-Image Text-Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/dance_diffusion">dance_diffusion</a></td><td><a href="https://github.com/williamberman/diffusers.git" rel="nofollow">Dance Diffusion</a></td><td>Unconditional Audio Generation</td></tr>
    <tr><td><a href="./api/pipelines/ddpm">ddpm</a></td><td><a href="https://huggingface.co/papers/2006.11239" rel="nofollow">Denoising Diffusion Probabilistic Models</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/ddim">ddim</a></td><td><a href="https://huggingface.co/papers/2010.02502" rel="nofollow">Denoising Diffusion Implicit Models</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./if">if</a></td><td><a href="./api/pipelines/if"><strong>IF</strong></a></td><td>Image Generation</td></tr>
    <tr><td><a href="./if">if_img2img</a></td><td><a href="./api/pipelines/if"><strong>IF</strong></a></td><td>Image-to-Image Generation</td></tr>
    <tr><td><a href="./if">if_inpainting</a></td><td><a href="./api/pipelines/if"><strong>IF</strong></a></td><td>Image-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/latent_diffusion">latent_diffusion</a></td><td><a href="https://huggingface.co/papers/2112.10752" rel="nofollow">High-Resolution Image Synthesis with Latent Diffusion Models</a></td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/latent_diffusion">latent_diffusion</a></td><td><a href="https://huggingface.co/papers/2112.10752" rel="nofollow">High-Resolution Image Synthesis with Latent Diffusion Models</a></td><td>Super Resolution Image-to-Image</td></tr>
    <tr><td><a href="./api/pipelines/latent_diffusion_uncond">latent_diffusion_uncond</a></td><td><a href="https://huggingface.co/papers/2112.10752" rel="nofollow">High-Resolution Image Synthesis with Latent Diffusion Models</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/paint_by_example">paint_by_example</a></td><td><a href="https://huggingface.co/papers/2211.13227" rel="nofollow">Paint by Example: Exemplar-based Image Editing with Diffusion Models</a></td><td>Image-Guided Image Inpainting</td></tr>
    <tr><td><a href="./api/pipelines/pndm">pndm</a></td><td><a href="https://huggingface.co/papers/2202.09778" rel="nofollow">Pseudo Numerical Methods for Diffusion Models on Manifolds</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/score_sde_ve">score_sde_ve</a></td><td><a href="https://openreview.net/forum?id=PxTIG12RRHS" rel="nofollow">Score-Based Generative Modeling through Stochastic Differential Equations</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/score_sde_vp">score_sde_vp</a></td><td><a href="https://openreview.net/forum?id=PxTIG12RRHS" rel="nofollow">Score-Based Generative Modeling through Stochastic Differential Equations</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/semantic_stable_diffusion">semantic_stable_diffusion</a></td><td><a href="https://huggingface.co/papers/2301.12247" rel="nofollow">Semantic Guidance</a></td><td>Text-Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/text2img">stable_diffusion_text2img</a></td><td><a href="https://stability.ai/blog/stable-diffusion-public-release" rel="nofollow">Stable Diffusion</a></td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/img2img">stable_diffusion_img2img</a></td><td><a href="https://stability.ai/blog/stable-diffusion-public-release" rel="nofollow">Stable Diffusion</a></td><td>Image-to-Image Text-Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/inpaint">stable_diffusion_inpaint</a></td><td><a href="https://stability.ai/blog/stable-diffusion-public-release" rel="nofollow">Stable Diffusion</a></td><td>Text-Guided Image Inpainting</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/panorama">stable_diffusion_panorama</a></td><td><a href="https://multidiffusion.github.io/" rel="nofollow">MultiDiffusion</a></td><td>Text-to-Panorama Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/pix2pix">stable_diffusion_pix2pix</a></td><td><a href="https://huggingface.co/papers/2211.09800" rel="nofollow">InstructPix2Pix: Learning to Follow Image Editing Instructions</a></td><td>Text-Guided Image Editing</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/pix2pix_zero">stable_diffusion_pix2pix_zero</a></td><td><a href="https://pix2pixzero.github.io/" rel="nofollow">Zero-shot Image-to-Image Translation</a></td><td>Text-Guided Image Editing</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/attend_and_excite">stable_diffusion_attend_and_excite</a></td><td><a href="https://huggingface.co/papers/2301.13826" rel="nofollow">Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models</a></td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/self_attention_guidance">stable_diffusion_self_attention_guidance</a></td><td><a href="https://huggingface.co/papers/2210.00939" rel="nofollow">Improving Sample Quality of Diffusion Models Using Self-Attention Guidance</a></td><td>Text-to-Image Generation, Unconditional Image Generation</td></tr>
    <tr><td><a href="./stable_diffusion/image_variation">stable_diffusion_image_variation</a></td><td><a href="https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations" rel="nofollow">Stable Diffusion Image Variations</a></td><td>Image-to-Image Generation</td></tr>
    <tr><td><a href="./stable_diffusion/latent_upscale">stable_diffusion_latent_upscale</a></td><td><a href="https://twitter.com/StabilityAI/status/1590531958815064065" rel="nofollow">Stable Diffusion Latent Upscaler</a></td><td>Text-Guided Super Resolution Image-to-Image</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion/model_editing">stable_diffusion_model_editing</a></td><td><a href="https://time-diffusion.github.io/" rel="nofollow">Editing Implicit Assumptions in Text-to-Image Diffusion Models</a></td><td>Text-to-Image Model Editing</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td><td><a href="https://stability.ai/blog/stable-diffusion-v2-release" rel="nofollow">Stable Diffusion 2</a></td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td><td><a href="https://stability.ai/blog/stable-diffusion-v2-release" rel="nofollow">Stable Diffusion 2</a></td><td>Text-Guided Image Inpainting</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td><td><a href="https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion" rel="nofollow">Depth-Conditional Stable Diffusion</a></td><td>Depth-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion_2">stable_diffusion_2</a></td><td><a href="https://stability.ai/blog/stable-diffusion-v2-release" rel="nofollow">Stable Diffusion 2</a></td><td>Text-Guided Super Resolution Image-to-Image</td></tr>
    <tr><td><a href="./api/pipelines/stable_diffusion_safe">stable_diffusion_safe</a></td><td><a href="https://huggingface.co/papers/2211.05105" rel="nofollow">Safe Stable Diffusion</a></td><td>Text-Guided Generation</td></tr>
    <tr><td><a href="./stable_unclip">stable_unclip</a></td><td>Stable unCLIP</td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./stable_unclip">stable_unclip</a></td><td>Stable unCLIP</td><td>Image-to-Image Text-Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/stochastic_karras_ve">stochastic_karras_ve</a></td><td><a href="https://huggingface.co/papers/2206.00364" rel="nofollow">Elucidating the Design Space of Diffusion-Based Generative Models</a></td><td>Unconditional Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/text_to_video">text_to_video_sd</a></td><td><a href="https://modelscope.cn/models/damo/text-to-video-synthesis/summary" rel="nofollow">ModelScope’s Text-to-Video-Synthesis Model in Open Domain</a></td><td>Text-to-Video Generation</td></tr>
    <tr><td><a href="./api/pipelines/unclip">unclip</a></td><td><a href="https://huggingface.co/papers/2204.06125" rel="nofollow">Hierarchical Text-Conditional Image Generation with CLIP Latents</a> (implementation by <a href="https://github.com/kakaobrain/karlo" rel="nofollow">kakaobrain</a>)</td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/versatile_diffusion">versatile_diffusion</a></td><td><a href="https://huggingface.co/papers/2211.08332" rel="nofollow">Versatile Diffusion: Text, Images and Variations All in One Diffusion Model</a></td><td>Text-to-Image Generation</td></tr>
    <tr><td><a href="./api/pipelines/versatile_diffusion">versatile_diffusion</a></td><td><a href="https://huggingface.co/papers/2211.08332" rel="nofollow">Versatile Diffusion: Text, Images and Variations All in One Diffusion Model</a></td><td>Image Variations Generation</td></tr>
    <tr><td><a href="./api/pipelines/versatile_diffusion">versatile_diffusion</a></td><td><a href="https://huggingface.co/papers/2211.08332" rel="nofollow">Versatile Diffusion: Text, Images and Variations All in One Diffusion Model</a></td><td>Dual Image and Text Guided Generation</td></tr>
    <tr><td><a href="./api/pipelines/vq_diffusion">vq_diffusion</a></td><td><a href="https://huggingface.co/papers/2111.14822" rel="nofollow">Vector Quantized Diffusion Model for Text-to-Image Synthesis</a></td><td>Text-to-Image Generation</td></tr>
  </tbody>
</table>