source: stringclasses (273 values)
url: stringlengths (47–172)
file_type: stringclasses (1 value)
chunk: stringlengths (1–512)
chunk_id: stringlengths (5–9)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#dynamic-quantization
.md
swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) ``` Apply dynamic quantization: ```python from torchao import apply_dynamic_quant
269_9_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#dynamic-quantization
.md
apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) ``` Finally, compile and perform inference: ```python pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True)
269_9_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#dynamic-quantization
.md
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe(prompt, num_inference_steps=30).images[0] ``` Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_5.png" width=500> </div>
269_9_9
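The snippets above reference `conv_filter_fn` and `dynamic_quant_filter_fn`, which are defined earlier in the tutorial and not reproduced in these chunks. As a rough, illustrative sketch only (not the tutorial's exact filters), a filter callable receives a module plus extra arguments and returns whether the swap or quantization should apply to it:

```python
# Illustrative sketch only -- the tutorial defines its own, more selective filters.
import torch.nn as nn

def conv_filter_fn(mod, *args):
    # swap_conv2d_1x1_to_linear targets 1x1 convolutions, which are
    # mathematically equivalent to linear layers
    return isinstance(mod, nn.Conv2d) and mod.kernel_size == (1, 1)

def dynamic_quant_filter_fn(mod, *args):
    # apply dynamic quantization to linear layers
    return isinstance(mod, nn.Linear)
```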
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
270_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
270_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
Diffusers provides many pipelines for basic tasks like generating images, videos, and audio, and for inpainting. On top of these, there are specialized pipelines for adapters and features like upscaling, super-resolution, and more. Different pipeline classes can even use the same checkpoint because they share the same pretrained model! With so many different pipelines, it can be overwhelming to know which pipeline class to use.
270_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
The [AutoPipeline](../api/pipelines/auto_pipeline) class is designed to simplify the variety of pipelines in Diffusers. It is a generic *task-first* pipeline that lets you focus on a task ([`AutoPipelineForText2Image`], [`AutoPipelineForImage2Image`], and [`AutoPipelineForInpainting`]) without needing to know the specific pipeline class. The [AutoPipeline](../api/pipelines/auto_pipeline) automatically detects the correct pipeline class to use.
270_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
For example, let's use the [dreamlike-art/dreamlike-photoreal-2.0](https://hf.co/dreamlike-art/dreamlike-photoreal-2.0) checkpoint. Under the hood, [AutoPipeline](../api/pipelines/auto_pipeline): 1. Detects a `"stable-diffusion"` class from the [model_index.json](https://hf.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/model_index.json) file.
270_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
2. Depending on the task you're interested in, it loads the [`StableDiffusionPipeline`], [`StableDiffusionImg2ImgPipeline`], or [`StableDiffusionInpaintPipeline`]. Any parameter (`strength`, `num_inference_steps`, etc.) you would pass to these specific pipelines can also be passed to the [AutoPipeline](../api/pipelines/auto_pipeline). <hfoptions id="autopipeline"> <hfoption id="text-to-image"> ```py from diffusers import AutoPipelineForText2Image import torch
270_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
pipe_txt2img = AutoPipelineForText2Image.from_pretrained( "dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16, use_safetensors=True ).to("cuda")
270_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
prompt = "cinematic photo of Godzilla eating sushi with a cat in a izakaya, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device="cpu").manual_seed(37) image = pipe_txt2img(prompt, generator=generator).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-text2img.png"/> </div> </hfoption> <hfoption id="image-to-image"> ```py
270_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
</div> </hfoption> <hfoption id="image-to-image"> ```py from diffusers import AutoPipelineForImage2Image from diffusers.utils import load_image import torch
270_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
pipe_img2img = AutoPipelineForImage2Image.from_pretrained( "dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16, use_safetensors=True ).to("cuda") init_image = load_image("https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-text2img.png")
270_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
prompt = "cinematic photo of Godzilla eating burgers with a cat in a fast food restaurant, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device="cpu").manual_seed(53) image = pipe_img2img(prompt, image=init_image, generator=generator).images[0] image ```
270_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
image = pipe_img2img(prompt, image=init_image, generator=generator).images[0] image ``` Notice how the [dreamlike-art/dreamlike-photoreal-2.0](https://hf.co/dreamlike-art/dreamlike-photoreal-2.0) checkpoint is used for both text-to-image and image-to-image tasks? To save memory and avoid loading the checkpoint twice, use the [`~DiffusionPipeline.from_pipe`] method. ```py pipe_img2img = AutoPipelineForImage2Image.from_pipe(pipe_txt2img).to("cuda")
270_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
```py pipe_img2img = AutoPipelineForImage2Image.from_pipe(pipe_txt2img).to("cuda") image = pipe_img2img(prompt, image=init_image, generator=generator).images[0] image ``` You can learn more about the [`~DiffusionPipeline.from_pipe`] method in the [Reuse a pipeline](../using-diffusers/loading#reuse-a-pipeline) guide. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-img2img.png"/> </div> </hfoption>
270_1_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
</div> </hfoption> <hfoption id="inpainting"> ```py from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image import torch
270_1_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
pipeline = AutoPipelineForInpainting.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True ).to("cuda") init_image = load_image("https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-img2img.png") mask_image = load_image("https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-mask.png")
270_1_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
.md
prompt = "cinematic photo of a owl, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device="cpu").manual_seed(38) image = pipeline(prompt, image=init_image, mask_image=mask_image, generator=generator, strength=0.4).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-inpaint.png"/> </div> </hfoption> </hfoptions>
270_1_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#unsupported-checkpoints
.md
The [AutoPipeline](../api/pipelines/auto_pipeline) supports [Stable Diffusion](../api/pipelines/stable_diffusion/overview), [Stable Diffusion XL](../api/pipelines/stable_diffusion/stable_diffusion_xl), [ControlNet](../api/pipelines/controlnet), [Kandinsky 2.1](../api/pipelines/kandinsky), [Kandinsky 2.2](../api/pipelines/kandinsky_v22), and [DeepFloyd IF](../api/pipelines/deepfloyd_if) checkpoints. If you try to load an unsupported checkpoint, you'll get an error. ```py
270_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#unsupported-checkpoints
.md
If you try to load an unsupported checkpoint, you'll get an error. ```py from diffusers import AutoPipelineForImage2Image import torch
270_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#unsupported-checkpoints
.md
pipeline = AutoPipelineForImage2Image.from_pretrained( "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True ) "ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" ```
270_2_2
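If you already know which pipeline class the checkpoint uses, you can load it directly instead. The following is a minimal sketch (not part of the original tutorial) that falls back to [`DiffusionPipeline`], which resolves the class from the checkpoint's `model_index.json`:

```py
import torch
from diffusers import AutoPipelineForImage2Image, DiffusionPipeline

try:
    pipeline = AutoPipelineForImage2Image.from_pretrained(
        "openai/shap-e-img2img", torch_dtype=torch.float16
    )
except ValueError:
    # DiffusionPipeline reads model_index.json and loads ShapEImg2ImgPipeline directly
    pipeline = DiffusionPipeline.from_pretrained(
        "openai/shap-e-img2img", torch_dtype=torch.float16
    )
```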
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
271_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
271_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
A modern diffusion model, like [Stable Diffusion XL (SDXL)](../using-diffusers/sdxl), is not just a single model, but a collection of multiple models. SDXL has four different model-level components: * A variational autoencoder (VAE) * Two text encoders * A UNet for denoising Usually, the text encoders and the denoiser are much larger compared to the VAE.
271_1_0
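To see how the component sizes compare, here is a minimal sketch (assuming the SDXL base checkpoint can be downloaded) that prints the parameter count of each model-level component:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
# the text encoders and the UNet dominate the parameter count compared to the VAE
for name in ["vae", "text_encoder", "text_encoder_2", "unet"]:
    params = sum(p.numel() for p in getattr(pipeline, name).parameters())
    print(f"{name}: {params / 1e6:.0f}M parameters")
```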
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
* Two text encoders * A UNet for denoising Usually, the text encoders and the denoiser are much larger compared to the VAE. As models get bigger and better, it’s possible your model is so big that even a single copy won’t fit in memory. But that doesn’t mean it can’t be loaded. If you have more than one GPU, there is more memory available to store your model. In this case, it’s better to split your model checkpoint into several smaller *checkpoint shards*.
271_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
When a text encoder checkpoint has multiple shards, like [T5-xxl for SD3](https://huggingface.co./stabilityai/stable-diffusion-3-medium-diffusers/tree/main/text_encoder_3), it is automatically handled by the [Transformers](https://huggingface.co./docs/transformers/index) library as it is a required dependency of Diffusers when using the [`StableDiffusion3Pipeline`]. More specifically, Transformers will automatically handle the loading of multiple shards within the requested model class and get it ready so
271_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
Transformers will automatically handle the loading of multiple shards within the requested model class and get it ready so that inference can be performed.
271_1_3
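As a sketch of what this looks like in practice (assuming you have access to the gated SD3 repository), the sharded T5 encoder can be loaded directly with Transformers and the shards are resolved automatically:

```python
import torch
from transformers import T5EncoderModel

# Transformers discovers and loads all shards under text_encoder_3 automatically
text_encoder_3 = T5EncoderModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="text_encoder_3",
    torch_dtype=torch.float16,
)
```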
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
The denoiser checkpoint can also have multiple shards and supports inference thanks to the [Accelerate](https://huggingface.co./docs/accelerate/index) library. > [!TIP] > Refer to the [Handling big models for inference](https://huggingface.co./docs/accelerate/main/en/concept_guides/big_model_inference) guide for general guidance when working with big models that are hard to fit into memory.
271_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
For example, let's save a sharded checkpoint for the [SDXL UNet](https://huggingface.co./stabilityai/stable-diffusion-xl-base-1.0/tree/main/unet): ```python from diffusers import UNet2DConditionModel
271_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
unet = UNet2DConditionModel.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet" ) unet.save_pretrained("sdxl-unet-sharded", max_shard_size="5GB") ``` The size of the fp32 variant of the SDXL UNet checkpoint is ~10.4GB. Set the `max_shard_size` parameter to 5GB to create 3 shards. After saving, you can load them in [`StableDiffusionXLPipeline`]: ```python from diffusers import UNet2DConditionModel, StableDiffusionXLPipeline import torch
271_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
unet = UNet2DConditionModel.from_pretrained( "sayakpaul/sdxl-unet-sharded", torch_dtype=torch.float16 ) pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16 ).to("cuda")
271_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
.md
image = pipeline("a cute dog running on the grass", num_inference_steps=30).images[0] image.save("dog.png") ``` If placing all the model-level components on the GPU at once is not feasible, use [`~DiffusionPipeline.enable_model_cpu_offload`] to help you: ```diff - pipeline.to("cuda") + pipeline.enable_model_cpu_offload() ``` In general, we recommend sharding when a checkpoint is more than 5GB (in fp32).
271_1_8
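To check whether a model crosses that threshold, a quick sketch (reusing the `unet` loaded in fp32 in the earlier snippet) sums the size of its parameters:

```python
# ~10.4GB for the fp32 SDXL UNet, well above the suggested 5GB sharding threshold
size_gb = sum(p.numel() * p.element_size() for p in unet.parameters()) / 1024**3
print(f"{size_gb:.1f} GB")
```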
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
On distributed setups, you can run inference across multiple GPUs with Accelerate. > [!WARNING] > This feature is experimental and its APIs might change in the future. With Accelerate, you can use the `device_map` to determine how to distribute the models of a pipeline across multiple devices. This is useful in situations where you have more than one GPU. For example, if you have two 8GB GPUs, then using [`~DiffusionPipeline.enable_model_cpu_offload`] may not work so well because:
271_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
* it only works on a single GPU * a single model might not fit on a single GPU ([`~DiffusionPipeline.enable_sequential_cpu_offload`] might work but it will be extremely slow and it is also limited to a single GPU) To make use of both GPUs, you can use the "balanced" device placement strategy which splits the models across all available GPUs. > [!WARNING] > Only the "balanced" strategy is supported at the moment, and we plan to support additional mapping strategies in the future. ```diff
271_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
```diff from diffusers import DiffusionPipeline import torch
271_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
pipeline = DiffusionPipeline.from_pretrained( - "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, device_map="balanced" ) image = pipeline("a dog").images[0] image ``` You can also pass a dictionary to enforce the maximum GPU memory that can be used on each device: ```diff from diffusers import DiffusionPipeline import torch
271_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
max_memory = {0:"1GB", 1:"1GB"} pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, device_map="balanced", + max_memory=max_memory ) image = pipeline("a dog").images[0] image ``` If a device is not present in `max_memory`, then it will be completely ignored and will not participate in the device placement.
271_2_4
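As a sketch of this behavior (the 8GB figure is illustrative), omitting GPU 1 from `max_memory` keeps it out of the placement entirely, so the models are packed onto GPU 0 and offloaded to CPU only if they don't fit:

```python
import torch
from diffusers import DiffusionPipeline

max_memory = {0: "8GB"}  # GPU 1 is absent, so it is ignored during placement
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
    device_map="balanced",
    max_memory=max_memory,
)
```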
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
By default, Diffusers uses the maximum memory of all devices. If the models don't fit on the GPUs, they are offloaded to the CPU. If the CPU doesn't have enough memory, then you might see an error. In that case, you could fall back to [`~DiffusionPipeline.enable_sequential_cpu_offload`] and [`~DiffusionPipeline.enable_model_cpu_offload`].
271_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
Call [`~DiffusionPipeline.reset_device_map`] to reset the `device_map` of a pipeline. This is also necessary if you want to use methods like `to()`, [`~DiffusionPipeline.enable_sequential_cpu_offload`], and [`~DiffusionPipeline.enable_model_cpu_offload`] on a pipeline that was device-mapped. ```py pipeline.reset_device_map() ``` Once a pipeline has been device-mapped, you can also access its device map via `hf_device_map`: ```py print(pipeline.hf_device_map) ```
271_2_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
.md
```py print(pipeline.hf_device_map) ``` An example device map would look like so: ```bash {'unet': 1, 'vae': 1, 'safety_checker': 0, 'text_encoder': 0} ```
271_2_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
272_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> [[open-in-colab]]
272_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
There are many adapter types (with [LoRAs](https://huggingface.co./docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images.
272_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
In this tutorial, you'll learn how to easily load and manage adapters for inference with the 🤗 [PEFT](https://huggingface.co./docs/peft/index) integration in 🤗 Diffusers. You'll use LoRA as the main adapter technique, so you'll see the terms LoRA and adapter used interchangeably. Let's first install all the required libraries. ```bash !pip install -q transformers accelerate peft diffusers ```
272_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
Let's first install all the required libraries. ```bash !pip install -q transformers accelerate peft diffusers ``` Now, load a pipeline with a [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) checkpoint: ```python from diffusers import DiffusionPipeline import torch
272_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") ```
272_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") ``` Next, load a [CiroN2022/toy-face](https://huggingface.co./CiroN2022/toy-face) adapter with the [`~diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] method. With the 🤗 PEFT integration, you can assign a specific `adapter_name` to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let's call this adapter `"toy"`. ```python
272_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
```python pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") ``` Make sure to include the token `toy_face` in the prompt and then you can perform inference: ```python prompt = "toy_face of a hacker with a hoodie"
272_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
lora_scale = 0.9 image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_8_1.png)
272_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
With the `adapter_name` parameter, it is really easy to use another adapter for inference! Load the [nerijs/pixel-art-xl](https://huggingface.co./nerijs/pixel-art-xl) adapter that has been fine-tuned to generate pixel art images and call it `"pixel"`. The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter, but you can activate the `"pixel"` adapter with the [`~loaders.peft.PeftAdapterMixin.set_adapters`] method: ```python
272_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
```python pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipe.set_adapters("pixel") ``` Make sure you include the token `pixel art` in your prompt to generate a pixel art image: ```python prompt = "a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ```
272_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
.md
).images[0] image ``` ![pixel-art](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_12_1.png) <Tip> By default, if the most up-to-date versions of PEFT and Transformers are detected, `low_cpu_mem_usage` is set to `True` to speed up the loading time of LoRA checkpoints. </Tip>
272_1_9
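If you prefer not to rely on version detection, a small sketch (assuming a recent Diffusers release where `load_lora_weights` accepts this flag) passes it explicitly:

```python
# assumes a Diffusers version where load_lora_weights accepts low_cpu_mem_usage
pipe.load_lora_weights(
    "nerijs/pixel-art-xl",
    weight_name="pixel-art-xl.safetensors",
    adapter_name="pixel",
    low_cpu_mem_usage=True,
)
```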
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
You can also merge different adapter checkpoints for inference to blend their styles together. Once again, use the [`~loaders.peft.PeftAdapterMixin.set_adapters`] method to activate the `pixel` and `toy` adapters and specify the weights for how they should be merged. ```python pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) ``` <Tip>
272_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
``` <Tip> LoRA checkpoints in the diffusion community are almost always obtained with [DreamBooth](https://huggingface.co./docs/diffusers/main/en/training/dreambooth). DreamBooth training often relies on "trigger" words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it's important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. </Tip>
272_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
</Tip> Remember to use the trigger words for [CiroN2022/toy-face](https://hf.co/CiroN2022/toy-face) and [nerijs/pixel-art-xl](https://hf.co/nerijs/pixel-art-xl) (these are found in their repositories) in the prompt to generate an image. ```python prompt = "toy_face of a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) ).images[0] image ```
272_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face-pixel-art](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_16_1.png) Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters. > [!TIP]
272_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters. > [!TIP] > Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the [Merge LoRAs](../using-diffusers/merge_loras) guide! To return to only using one adapter, use the [`~loaders.peft.PeftAdapterMixin.set_adapters`] method to activate the `"toy"` adapter: ```python pipe.set_adapters("toy")
272_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
prompt = "toy_face of a hacker with a hoodie" lora_scale = 0.9 image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` Or to disable all adapters entirely, use the [`~loaders.peft.PeftAdapterMixin.disable_lora`] method to return the base model. ```python pipe.disable_lora()
272_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
.md
prompt = "toy_face of a hacker with a hoodie" image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![no-lora](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png)
272_2_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
For even more customization, you can control how strongly the adapter affects each part of the pipeline. For this, pass a dictionary with the control strengths (called "scales") to [`~loaders.peft.PeftAdapterMixin.set_adapters`]. For example, here's how you can turn on the adapter for the `down` parts, but turn it off for the `mid` and `up` parts: ```python pipe.enable_lora() # enable lora again, after we disabled it above prompt = "toy_face of a hacker with a hoodie, pixel art"
272_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
pipe.enable_lora() # enable lora again, after we disabled it above prompt = "toy_face of a hacker with a hoodie, pixel art" adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} } pipe.set_adapters("pixel", adapter_weight_scales) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ```
272_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-text-and-down](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_down.png) Let's see how turning off the `down` part and turning on the `mid` and `up` parts, respectively, changes the image. ```python adapter_weight_scales = { "unet": { "down": 0, "mid": 1, "up": 0} }
272_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
```python adapter_weight_scales = { "unet": { "down": 0, "mid": 1, "up": 0} } pipe.set_adapters("pixel", adapter_weight_scales) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-text-and-mid](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mid.png) ```python adapter_weight_scales = { "unet": { "down": 0, "mid": 0, "up": 1} }
272_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
```python adapter_weight_scales = { "unet": { "down": 0, "mid": 0, "up": 1} } pipe.set_adapters("pixel", adapter_weight_scales) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-text-and-up](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_up.png) Looks cool!
272_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
Looks cool! This is a really powerful feature. You can use it to control the adapter strengths down to the per-transformer level, and you can even use it for multiple adapters. ```python adapter_weight_scales_toy = 0.5 adapter_weight_scales_pixel = { "unet": { "down": 0.9, # all transformers in the down-part will use scale 0.9 # "mid" # because, in this example, "mid" is not given, all transformers in the mid part will use the default scale 1.0 "up": {
272_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
"up": { "block_0": 0.6, # all 3 transformers in the 0th block in the up-part will use scale 0.6 "block_1": [0.4, 0.8, 1.0], # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively } } } pipe.set_adapters(["toy", "pixel"], [adapter_weight_scales_toy, adapter_weight_scales_pixel]) image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ```
272_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
.md
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] image ``` ![block-lora-mixed](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mixed.png)
272_3_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#manage-adapters
.md
You have attached multiple adapters in this tutorial, and if you're unsure which adapters have been attached to the pipeline's components, use the [`~diffusers.loaders.StableDiffusionLoraLoaderMixin.get_active_adapters`] method to check the list of active adapters: ```py active_adapters = pipe.get_active_adapters() active_adapters ["toy", "pixel"] ```
272_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#manage-adapters
.md
```py active_adapters = pipe.get_active_adapters() active_adapters ["toy", "pixel"] ``` You can also get the active adapters of each pipeline component with [`~diffusers.loaders.StableDiffusionLoraLoaderMixin.get_list_adapters`]: ```py list_adapters_component_wise = pipe.get_list_adapters() list_adapters_component_wise {"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} ```
272_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#manage-adapters
.md
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} ``` The [`~loaders.peft.PeftAdapterMixin.delete_adapters`] function completely removes an adapter and their LoRA layers from a model. ```py pipe.delete_adapters("toy") pipe.get_active_adapters() ["pixel"] ```
272_4_2