source: stringclasses (273 values)
url: stringlengths (47–172)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
0_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
0_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#effective-and-efficient-diffusion
.md
[[open-in-colab]] Getting the [`DiffusionPipeline`] to generate images in a certain style or include what you want can be tricky. Oftentimes, you have to run the [`DiffusionPipeline`] several times before you end up with an image you're happy with. But generating something out of nothing is a computationally intensive process, especially if you're running inference over and over again.
0_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#effective-and-efficient-diffusion
.md
This is why it's important to get the most *computational* (speed) and *memory* (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the [`DiffusionPipeline`]. Begin by loading the [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co./stable-diffusion-v1-5/stable-diffusion-v1-5) model: ```python from diffusers import DiffusionPipeline
0_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#effective-and-efficient-diffusion
.md
model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5" pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) ``` The example prompt you'll use is a portrait of an old warrior chief, but feel free to use your own prompt: ```python prompt = "portrait photo of a old warrior chief" ```
0_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
<Tip> 💡 If you don't have access to a GPU, you can use one for free from a GPU provider like [Colab](https://colab.research.google.com/)! </Tip> One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: ```python pipeline = pipeline.to("cuda") ```
0_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
```python pipeline = pipeline.to("cuda") ``` To make sure you can use the same image and improve on it, use a [`Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) and set a seed for [reproducibility](./using-diffusers/reusing_seeds): ```python import torch
0_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
generator = torch.Generator("cuda").manual_seed(0) ``` Now you can generate an image: ```python image = pipeline(prompt, generator=generator).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_1.png"> </div>
0_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
<img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_1.png"> </div> This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the [`DiffusionPipeline`] runs inference with full `float32` precision for 50 inference steps. You can speed this up by switching to a lower precision like `float16` or running fewer inference steps.
0_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
Let's start by loading the model in `float16` and generating an image: ```python import torch
0_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) pipeline = pipeline.to("cuda") generator = torch.Generator("cuda").manual_seed(0) image = pipeline(prompt, generator=generator).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_2.png"> </div> This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before!
0_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
</div> This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! <Tip> 💡 We strongly suggest always running your pipelines in `float16`, and so far, we've rarely seen any degradation in output quality. </Tip>
0_2_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
</Tip> Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the [`DiffusionPipeline`] by calling the `compatibles` method: ```python pipeline.scheduler.compatibles [ diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler,
0_2_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,
0_2_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, ]
0_2_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
diffusers.schedulers.scheduling_ddim.DDIMScheduler, ] ``` The Stable Diffusion model uses the [`PNDMScheduler`] by default, which usually requires ~50 inference steps, but more performant schedulers like the [`DPMSolverMultistepScheduler`] require only ~20 or 25 inference steps. Use the [`~ConfigMixin.from_config`] method to load a new scheduler: ```python from diffusers import DPMSolverMultistepScheduler
0_2_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) ``` Now set the `num_inference_steps` to 20: ```python generator = torch.Generator("cuda").manual_seed(0) image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_3.png"> </div>
0_2_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#speed
.md
<img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_3.png"> </div> Great, you've managed to cut the inference time to just 4 seconds! ⚡️
0_2_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#memory
.md
The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you're often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an `OutOfMemoryError` (OOM).
0_3_0
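A minimal sketch of that probing idea, assuming the `pipeline` and `prompt` from the earlier steps are already on a CUDA device; the helper name `find_max_batch_size` and the candidate sizes are illustrative, not part of the original tutorial:

```python
import torch

def find_max_batch_size(pipeline, prompt, candidates=(1, 2, 4, 8, 16)):
    """Return the largest candidate batch size that fits in GPU memory."""
    max_ok = 0
    for batch_size in candidates:
        try:
            # A short run is enough to hit the peak memory usage for this batch size.
            pipeline(prompt=[prompt] * batch_size, num_inference_steps=20)
            max_ok = batch_size
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()  # release cached blocks before giving up
            break
    return max_ok
```

The `get_inputs` function defined next in the tutorial takes the complementary approach: it fixes the batch size per call so each `Generator` seed can be reused later.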
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#memory
.md
Create a function that'll generate a batch of images from a list of prompts and `Generators`. Make sure to assign each `Generator` a seed so you can reuse it if it produces a good result. ```python def get_inputs(batch_size=1): generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] prompts = batch_size * [prompt] num_inference_steps = 20
0_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#memory
.md
return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} ``` Start with `batch_size=4` and see how much memory you've consumed: ```python from diffusers.utils import make_image_grid
0_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#memory
.md
images = pipeline(**get_inputs(batch_size=4)).images make_image_grid(images, 2, 2) ``` Unless you have a GPU with more vRAM, the code above probably returned an `OOM` error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the [`~DiffusionPipeline.enable_attention_slicing`] function: ```python
0_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#memory
.md
```python pipeline.enable_attention_slicing() ``` Now try increasing the `batch_size` to 8! ```python images = pipeline(**get_inputs(batch_size=8)).images make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_5.png"> </div>
0_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#memory
.md
<img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_5.png"> </div> Whereas before you couldn't even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality.
0_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#quality
.md
In the last two sections, you learned how to optimize the speed of your pipeline by using `fp16`, reducing the number of inference steps with a more performant scheduler, and enabling attention slicing to reduce memory consumption. A recap of these optimizations is sketched below. Now you're going to focus on how to improve the quality of generated images.
0_4_0
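A condensed sketch that combines the three optimizations from the previous sections (same model, scheduler, and prompt as used above):

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Load in half precision, swap in a faster scheduler, and slice attention.
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.enable_attention_slicing()

generator = torch.Generator("cuda").manual_seed(0)
image = pipeline(
    "portrait photo of a old warrior chief",
    generator=generator,
    num_inference_steps=20,
).images[0]
```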
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-checkpoints
.md
The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn't automatically mean you'll get better results. You'll still have to experiment with different checkpoints yourself, and do a little research (such as using [negative prompts](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/)) to get the best results.
0_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-checkpoints
.md
As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the [Hub](https://huggingface.co./models?library=diffusers&sort=downloads) and [Diffusers Gallery](https://huggingface.co./spaces/huggingface-projects/diffusers-gallery) to find one you're interested in!
0_5_1
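Swapping checkpoints only changes the identifier passed to [`~DiffusionPipeline.from_pretrained`]. A sketch; the checkpoint name below is just an example of a newer model from the Hub, not a recommendation from this guide:

```python
import torch
from diffusers import DiffusionPipeline

# "stabilityai/stable-diffusion-2-1" is used purely for illustration; any
# Diffusers-format checkpoint from the Hub loads the same way.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
image = pipeline("portrait photo of a old warrior chief").images[0]
```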
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-pipeline-components
.md
You can also try replacing the current pipeline components with a newer version. Let's try loading the latest [autoencoder](https://huggingface.co./stabilityai/stable-diffusion-2-1/tree/main/vae) from Stability AI into the pipeline, and generate some images: ```python from diffusers import AutoencoderKL
0_6_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-pipeline-components
.md
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") pipeline.vae = vae images = pipeline(**get_inputs(batch_size=8)).images make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_6.png"> </div>
0_6_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-prompt-engineering
.md
The text prompt you use to generate an image is super important, so much so that crafting it is called *prompt engineering*. Some considerations to keep in mind during prompt engineering are: - How are images similar to the one I want to generate stored on the internet? - What additional detail can I give that steers the model towards the style I want? With this in mind, let's improve the prompt to include color and higher quality details: ```python
0_7_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-prompt-engineering
.md
With this in mind, let's improve the prompt to include color and higher quality details: ```python prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" ``` Generate a batch of images with the new prompt: ```python images = pipeline(**get_inputs(batch_size=8)).images make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center">
0_7_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-prompt-engineering
.md
make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_7.png"> </div> Pretty impressive! Let's tweak the second image - corresponding to the `Generator` with a seed of `1` - a bit more by adding some text about the age of the subject: ```python prompts = [
0_7_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-prompt-engineering
.md
```python prompts = [ "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of an old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta",
0_7_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-prompt-engineering
.md
"portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", ]
0_7_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#better-prompt-engineering
.md
generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images make_image_grid(images, 2, 2) ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_8.png"> </div>
0_7_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#next-steps
.md
In this tutorial, you learned how to optimize a [`DiffusionPipeline`] for computational and memory efficiency as well as how to improve the quality of generated outputs. If you're interested in making your pipeline even faster, take a look at the following resources: - Learn how [PyTorch 2.0](./optimization/torch2.0) and [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html) can yield 5-300% faster inference speed. On an A100 GPU, inference can be up to 50% faster!
0_8_0
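A minimal sketch of using `torch.compile` with the pipeline from this tutorial, assuming PyTorch 2.0 or later is installed; the first call is slower because it triggers compilation:

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# Compile the UNet, the most compute-intensive component of the pipeline.
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

image = pipeline("portrait photo of a old warrior chief").images[0]
```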
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/stable_diffusion.md
https://huggingface.co./docs/diffusers/en/stable_diffusion/#next-steps
.md
- If you can't use PyTorch 2, we recommend you install [xFormers](./optimization/xformers). Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. - Other optimization techniques, such as model offloading, are covered in [this guide](./optimization/fp16).
0_8_1
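Both techniques are enabled with a single call on the pipeline. A sketch, assuming `pipeline` is already loaded and xFormers is installed (`pip install xformers`); see the linked guides for details and caveats:

```python
# Memory-efficient attention from xFormers (mainly useful on PyTorch 1.x;
# PyTorch 2.0+ already ships a similar scaled dot-product attention).
pipeline.enable_xformers_memory_efficient_attention()

# Model offloading keeps submodules on the CPU and moves each one to the GPU
# only while it runs, lowering peak VRAM at a small cost in speed.
pipeline.enable_model_cpu_offload()
```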
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
1_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
1_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
Welcome to Community Projects. This space is dedicated to showcasing the incredible work and innovative applications created by our vibrant community using the `diffusers` library. This section aims to: - Highlight diverse and inspiring projects built with `diffusers` - Foster knowledge sharing within our community - Provide real-world examples of how `diffusers` can be leveraged Happy exploring, and thank you for being part of the Diffusers community! <table> <tr> <th>Project Name</th>
1_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
Happy exploring, and thank you for being part of the Diffusers community! <table> <tr> <th>Project Name</th> <th>Description</th> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/carson-katri/dream-textures"> dream-textures </a></td> <td>Stable Diffusion built-in to Blender</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/megvii-research/HiDiffusion"> HiDiffusion </a></td>
1_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
<tr style="border-top: 2px solid black"> <td><a href="https://github.com/megvii-research/HiDiffusion"> HiDiffusion </a></td> <td>Increases the resolution and speed of your diffusion model by only adding a single line of code</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/lllyasviel/IC-Light"> IC-Light </a></td> <td>IC-Light is a project to manipulate the illumination of images</td> </tr> <tr style="border-top: 2px solid black">
1_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
<td>IC-Light is a project to manipulate the illumination of images</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/InstantID/InstantID"> InstantID </a></td> <td>InstantID : Zero-shot Identity-Preserving Generation in Seconds</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/Sanster/IOPaint"> IOPaint </a></td>
1_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
</tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/Sanster/IOPaint"> IOPaint </a></td> <td>Image inpainting tool powered by SOTA AI models. Remove any unwanted object, defect, or person from your pictures, or erase and replace (powered by Stable Diffusion) anything in your pictures.</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/bmaltais/kohya_ss"> Kohya </a></td> <td>Gradio GUI for Kohya's Stable Diffusion trainers</td> </tr>
1_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
<td>Gradio GUI for Kohya's Stable Diffusion trainers</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/magic-research/magic-animate"> MagicAnimate </a></td> <td>MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/levihsu/OOTDiffusion"> OOTDiffusion </a></td> <td>Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on</td> </tr>
1_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
<td>Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/vladmandic/automatic"> SD.Next </a></td> <td>SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/ashawkey/stable-dreamfusion"> stable-dreamfusion </a></td>
1_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
<td><a href="https://github.com/ashawkey/stable-dreamfusion"> stable-dreamfusion </a></td> <td>Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/HVision-NKU/StoryDiffusion"> StoryDiffusion </a></td> <td>StoryDiffusion can create a magic story by generating consistent images and videos.</td> </tr> <tr style="border-top: 2px solid black">
1_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
</tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/cumulo-autumn/StreamDiffusion"> StreamDiffusion </a></td> <td>A Pipeline-Level Solution for Real-Time Interactive Generation</td> </tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/Netwrck/stable-diffusion-server"> Stable Diffusion Server </a></td> <td>A server configured for Inpainting/Generation/img2img with one stable diffusion model</td> </tr> <tr style="border-top: 2px solid black">
1_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/community_projects.md
https://huggingface.co./docs/diffusers/en/community_projects/#community-projects
.md
</tr> <tr style="border-top: 2px solid black"> <td><a href="https://github.com/suzukimain/auto_diffusers"> Model Search </a></td> <td>Search models on Civitai and Hugging Face</td> </tr> </table>
1_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
2_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> [[open-in-colab]]
2_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#quicktour
.md
Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone.
2_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#quicktour
.md
Whether you're a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: * The [`DiffusionPipeline`] is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. * Popular pretrained [model](./api/models) architectures and modules that can be used as building blocks for creating diffusion systems.
2_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#quicktour
.md
* Many different [schedulers](./api/schedulers/overview) - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the [`DiffusionPipeline`] for inference, and then walk you through how to combine a model and scheduler to replicate what's happening inside the [`DiffusionPipeline`]. <Tip>
2_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#quicktour
.md
<Tip> The quicktour is a simplified version of the introductory 🧨 Diffusers [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) to help you get started quickly. If you want to learn more about 🧨 Diffusers' goal, design philosophy, and additional details about its core API, check out the notebook! </Tip> Before you begin, make sure you have all the necessary libraries installed: ```py
2_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#quicktour
.md
</Tip> Before you begin, make sure you have all the necessary libraries installed: ```py # uncomment to install the necessary libraries in Colab #!pip install --upgrade diffusers accelerate transformers ``` - [🤗 Accelerate](https://huggingface.co./docs/accelerate/index) speeds up model loading for inference and training.
2_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#quicktour
.md
``` - [🤗 Accelerate](https://huggingface.co./docs/accelerate/index) speeds up model loading for inference and training. - [🤗 Transformers](https://huggingface.co./docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co./docs/diffusers/api/pipelines/stable_diffusion/overview).
2_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
The [`DiffusionPipeline`] is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the [`DiffusionPipeline`] out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the [🧨 Diffusers Summary](./api/pipelines/overview#diffusers-summary) table.
2_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
| **Task** | **Description** | **Pipeline** |------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------| | Unconditional Image Generation | generate an image from Gaussian noise | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) |
2_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
| Text-Guided Image Generation | generate an image given a text prompt | [conditional_image_generation](./using-diffusers/conditional_image_generation) | | Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | [img2img](./using-diffusers/img2img) | | Text-Guided Image-Inpainting | fill the masked part of an image given the image, the mask and a text prompt | [inpaint](./using-diffusers/inpaint) |
2_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
| Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | [depth2img](./using-diffusers/depth2img) | Start by creating an instance of a [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download. You can use the [`DiffusionPipeline`] for any [checkpoint](https://huggingface.co./models?library=diffusers&sort=downloads) stored on the Hugging Face Hub.
2_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
In this quicktour, you'll load the [`stable-diffusion-v1-5`](https://huggingface.co./stable-diffusion-v1-5/stable-diffusion-v1-5) checkpoint for text-to-image generation. <Tip warning={true}>
2_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
For [Stable Diffusion](https://huggingface.co./CompVis/stable-diffusion) models, please carefully read the [license](https://huggingface.co./spaces/CompVis/stable-diffusion-license) first before running the model. 🧨 Diffusers implements a [`safety_checker`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) to prevent offensive or harmful content, but the model's improved image generation capabilities can still produce potentially harmful content.
2_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
</Tip> Load the model with the [`~DiffusionPipeline.from_pretrained`] method: ```python >>> from diffusers import DiffusionPipeline
2_2_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
>>> pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True) ``` The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components. You'll see that the Stable Diffusion pipeline is composed of the [`UNet2DConditionModel`] and [`PNDMScheduler`] among other things: ```py >>> pipeline StableDiffusionPipeline { "_class_name": "StableDiffusionPipeline", "_diffusers_version": "0.21.4", ..., "scheduler": [
2_2_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
StableDiffusionPipeline { "_class_name": "StableDiffusionPipeline", "_diffusers_version": "0.21.4", ..., "scheduler": [ "diffusers", "PNDMScheduler" ], ..., "unet": [ "diffusers", "UNet2DConditionModel" ], "vae": [ "diffusers", "AutoencoderKL" ] } ``` We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. You can move the generator object to a GPU, just like you would in PyTorch: ```python >>> pipeline.to("cuda") ```
2_2_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
You can move the generator object to a GPU, just like you would in PyTorch: ```python >>> pipeline.to("cuda") ``` Now you can pass a text prompt to the `pipeline` to generate an image, and then access the denoised image. By default, the image output is wrapped in a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object. ```python >>> image = pipeline("An image of a squirrel in Picasso style").images[0] >>> image ```
2_2_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#diffusionpipeline
.md
```python >>> image = pipeline("An image of a squirrel in Picasso style").images[0] >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/image_of_squirrel_painting.png"/> </div> Save the image by calling `save`: ```python >>> image.save("image_of_squirrel_painting.png") ```
2_2_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#local-pipeline
.md
You can also use the pipeline locally. The only difference is you need to download the weights first: ```bash !git lfs install !git clone https://huggingface.co./stable-diffusion-v1-5/stable-diffusion-v1-5 ``` Then load the saved weights into the pipeline: ```python >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) ``` Now, you can run the pipeline as you would in the section above.
2_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#swapping-schedulers
.md
Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default [`PNDMScheduler`] with the [`EulerDiscreteScheduler`], load it with the [`~diffusers.ConfigMixin.from_config`] method: ```py >>> from diffusers import EulerDiscreteScheduler
2_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#swapping-schedulers
.md
>>> pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True) >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) ``` Try generating an image with the new scheduler and see if you notice a difference! In the next section, you'll take a closer look at the components - the model and scheduler - that make up the [`DiffusionPipeline`] and learn how to use these components to generate an image of a cat.
2_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
Most models take a noisy sample and, at each timestep, predict the *noise residual* (other models learn to predict the previous sample directly, or the velocity or [`v-prediction`](https://github.com/huggingface/diffusers/blob/5e5ce13e2f89ac45a0066cb3f369462a3cf1d9ef/src/diffusers/schedulers/scheduling_ddim.py#L110)), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems.
2_5_0
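As a brief aside (standard DDPM notation, not taken from this page), the epsilon-prediction objective behind the *noise residual* can be sketched as:

$$
x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,
\qquad \epsilon \sim \mathcal{N}(0, I),
$$

so the network is trained such that $\epsilon_\theta(x_t, t) \approx \epsilon$, and the scheduler uses that prediction to step from $x_t$ to a less noisy $x_{t-1}$.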
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
Models are initialized with the [`~ModelMixin.from_pretrained`] method, which also locally caches the model weights so loading is faster the next time. For the quicktour, you'll load the [`UNet2DModel`], a basic unconditional image generation model with a checkpoint trained on cat images: ```py >>> from diffusers import UNet2DModel
2_5_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
>>> repo_id = "google/ddpm-cat-256" >>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) ``` To access the model configuration, call `model.config`: ```py >>> model.config ``` The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can't be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference.
2_5_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
Some of the most important parameters are: * `sample_size`: the height and width dimension of the input sample. * `in_channels`: the number of input channels of the input sample. * `down_block_types` and `up_block_types`: the type of down- and upsampling blocks used to create the UNet architecture. * `block_out_channels`: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks.
2_5_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
* `layers_per_block`: the number of ResNet blocks present in each UNet block. To use the model for inference, create a random Gaussian noise tensor with the shape of an image. It should have a `batch` axis because the model can receive multiple random noise samples, a `channel` axis corresponding to the number of input channels, and `sample_size` axes for the height and width of the image: ```py >>> import torch
2_5_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
>>> torch.manual_seed(0)
2_5_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) >>> noisy_sample.shape torch.Size([1, 3, 256, 256]) ```
2_5_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
>>> noisy_sample.shape torch.Size([1, 3, 256, 256]) ``` For inference, pass the noisy image and a `timestep` to the model. The `timestep` indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the `sample` method to get the model output: ```py >>> with torch.no_grad(): ... noisy_residual = model(sample=noisy_sample, timestep=2).sample ```
2_5_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#models
.md
```py >>> with torch.no_grad(): ... noisy_residual = model(sample=noisy_sample, timestep=2).sample ``` To generate actual examples though, you'll need a scheduler to guide the denoising process. In the next section, you'll learn how to couple a model with a scheduler.
2_5_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the `noisy_residual`. <Tip> 🧨 Diffusers is a toolbox for building diffusion systems. While the [`DiffusionPipeline`] is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. </Tip>
2_6_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
</Tip> For the quicktour, you'll instantiate the [`DDPMScheduler`] with its [`~diffusers.SchedulerMixin.from_pretrained`] method: ```py >>> from diffusers import DDPMScheduler
2_6_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
>>> scheduler = DDPMScheduler.from_pretrained(repo_id) >>> scheduler DDPMScheduler { "_class_name": "DDPMScheduler", "_diffusers_version": "0.21.4", "beta_end": 0.02, "beta_schedule": "linear", "beta_start": 0.0001, "clip_sample": true, "clip_sample_range": 1.0, "dynamic_thresholding_ratio": 0.995, "num_train_timesteps": 1000, "prediction_type": "epsilon", "sample_max_value": 1.0, "steps_offset": 0, "thresholding": false, "timestep_spacing": "leading", "trained_betas": null, "variance_type": "fixed_small"
2_6_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
"steps_offset": 0, "thresholding": false, "timestep_spacing": "leading", "trained_betas": null, "variance_type": "fixed_small" } ``` <Tip> πŸ’‘ Unlike a model, a scheduler does not have trainable weights and is parameter-free! </Tip> Some of the most important parameters are: * `num_train_timesteps`: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample.
2_6_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
* `beta_schedule`: the type of noise schedule to use for inference and training. * `beta_start` and `beta_end`: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler's [`~diffusers.DDPMScheduler.step`] method: model output, `timestep`, and current `sample`. ```py >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample >>> less_noisy_sample.shape
2_6_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
>>> less_noisy_sample.shape torch.Size([1, 3, 256, 256]) ``` The `less_noisy_sample` can be passed to the next `timestep` where it'll get even less noisy! Let's bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a `PIL.Image`: ```py >>> import PIL.Image >>> import numpy as np
2_6_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
>>> def display_sample(sample, i): ... image_processed = sample.cpu().permute(0, 2, 3, 1) ... image_processed = (image_processed + 1.0) * 127.5 ... image_processed = image_processed.numpy().astype(np.uint8)
2_6_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
... image_pil = PIL.Image.fromarray(image_processed[0]) ... display(f"Image at step {i}") ... display(image_pil) ``` To speed up the denoising process, move the input and model to a GPU: ```py >>> model.to("cuda") >>> noisy_sample = noisy_sample.to("cuda") ``` Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: ```py >>> import tqdm >>> sample = noisy_sample
2_6_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
>>> sample = noisy_sample >>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): ... # 1. predict noise residual ... with torch.no_grad(): ... residual = model(sample, t).sample ... # 2. compute less noisy image and set x_t -> x_t-1 ... sample = scheduler.step(residual, t, sample).prev_sample
2_6_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#schedulers
.md
... # 2. compute less noisy image and set x_t -> x_t-1 ... sample = scheduler.step(residual, t, sample).prev_sample ... # 3. optionally look at image ... if (i + 1) % 50 == 0: ... display_sample(sample, i + 1) ``` Sit back and watch as a cat is generated from nothing but noise! 😻 <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/diffusion-quicktour.png"/> </div>
2_6_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#next-steps
.md
Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: * Train or finetune a model to generate your own images in the [training](./tutorials/basic_training) tutorial. * See example official and community [training or finetuning scripts](https://github.com/huggingface/diffusers/tree/main/examples#-diffusers-examples) for a variety of use cases.
2_7_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#next-steps
.md
* Learn more about loading, accessing, changing, and comparing schedulers in the [Using different Schedulers](./using-diffusers/schedulers) guide. * Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the [Stable Diffusion](./stable_diffusion) guide.
2_7_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/quicktour.md
https://huggingface.co./docs/diffusers/en/quicktour/#next-steps
.md
* Dive deeper into speeding up 🧨 Diffusers with guides on [optimized PyTorch on a GPU](./optimization/fp16), and inference guides for running [Stable Diffusion on Apple Silicon (M1/M2)](./optimization/mps) and [ONNX Runtime](./optimization/onnx).
2_7_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
3_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <br> <img src="https://raw.githubusercontent.com/huggingface/diffusers/77aadfee6a891ab9fcfb780f87c693f7a5beeb8e/docs/source/imgs/diffusers_library.jpg" width="400"/> <br> </p>
3_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/#diffusers
.md
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](conceptual/philosophy#usability-over-performance), [simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over
3_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/#diffusers
.md
[simple over easy](conceptual/philosophy#simple-over-easy), and [customizability over abstractions](conceptual/philosophy#tweakable-contributorfriendly-over-abstraction).
3_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/#diffusers
.md
The library has three main components: - State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers; check out the table in the pipeline [overview](api/pipelines/overview) for a complete list of available pipelines and the tasks they solve. - Interchangeable [noise schedulers](api/schedulers/overview) for balancing trade-offs between generation speed and quality.
3_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/#diffusers
.md
- Interchangeable [noise schedulers](api/schedulers/overview) for balancing trade-offs between generation speed and quality. - Pretrained [models](api/models) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
3_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/index.md
https://huggingface.co./docs/diffusers/en/index/#diffusers
.md
<div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/tutorial_overview" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
3_1_4