Dataset columns: text (string, 3-14.4k chars), source (273 classes), url (string, 47-172 chars), source_section (string, 0-95 chars), file_type (1 class: .md), id (string, 3-6 chars)
EulerAncestralDiscreteSchedulerOutput Output class for the scheduler's `step` function output. Args: prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the denoising loop. pred_original_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): The predicted denoised sample `(x_{0})` based on the model output from the current timestep. `pred_original_sample` can be used to preview progress or for guidance.
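The following is a minimal sketch (not from the page above) of how these two fields are typically consumed in a denoising loop. The `fake_denoiser` below is a hypothetical stand-in for a real noise-prediction model such as a UNet, so the snippet is self-contained:

```py
import torch
from diffusers import EulerAncestralDiscreteScheduler

# Hypothetical stand-in for a real denoiser (e.g. a UNet) so the sketch runs on its own.
def fake_denoiser(sample, timestep):
    return torch.randn_like(sample)

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=30)

sample = torch.randn(1, 4, 64, 64)  # start from pure noise
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = fake_denoiser(model_input, t)
    output = scheduler.step(noise_pred, t, sample)
    sample = output.prev_sample            # next model input in the denoising loop
    preview = output.pred_original_sample  # x_0 estimate, handy for previews or guidance
```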
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md
https://huggingface.co./docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduleroutput
#eulerancestraldiscretescheduleroutput
.md
265_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md
https://huggingface.co./docs/diffusers/en/api/schedulers/score_sde_ve/
.md
266_0
`ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the [Score-Based Generative Modeling through Stochastic Differential Equations](https://huggingface.co./papers/2011.13456) paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. The abstract from the paper is: *Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md
https://huggingface.co./docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler
#scoresdevescheduler
.md
266_1
ScoreSdeVeScheduler `ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. Args: num_train_timesteps (`int`, defaults to 1000): The number of diffusion steps to train the model. snr (`float`, defaults to 0.15): A coefficient weighting the step from the `model_output` sample (from the network) to the random noise. sigma_min (`float`, defaults to 0.01): The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror the distribution of the data. sigma_max (`float`, defaults to 1348.0): The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (`float`, defaults to 1e-5): The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (`int`, defaults to 1): The number of correction steps performed on a produced sample.
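As a quick, illustrative sketch (not part of the original page), the scheduler can be instantiated with the documented arguments written out explicitly, and its timestep and sigma schedules prepared for sampling:

```py
from diffusers import ScoreSdeVeScheduler

# The documented values written out explicitly for clarity.
scheduler = ScoreSdeVeScheduler(
    num_train_timesteps=1000,
    snr=0.15,
    sigma_min=0.01,
    sigma_max=1348.0,
    sampling_eps=1e-5,
    correct_steps=1,
)

# Prepare the continuous timesteps and the matching sigma schedule for sampling.
scheduler.set_timesteps(num_inference_steps=50)
scheduler.set_sigmas(num_inference_steps=50)
print(scheduler.timesteps.shape, scheduler.sigmas.shape)
```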
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md
https://huggingface.co./docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler
#scoresdevescheduler
.md
266_2
SdeVeOutput Output class for the scheduler's `step` function output. Args: prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the denoising loop. prev_sample_mean (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): Mean averaged `prev_sample` over previous timesteps.
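For context, here is a rough, self-contained sketch of the predictor-corrector loop in which these fields appear (the `fake_score_model` is a hypothetical placeholder, and the loop is an illustration rather than the actual pipeline code). `step_correct` refines the current sample, while `step_pred` advances it and also returns `prev_sample_mean`, which is typically kept as the final denoised output:

```py
import torch
from diffusers import ScoreSdeVeScheduler

# Hypothetical stand-in for a real score model so the sketch is self-contained.
def fake_score_model(sample, t):
    return torch.randn_like(sample)

scheduler = ScoreSdeVeScheduler()
scheduler.set_timesteps(num_inference_steps=10)
scheduler.set_sigmas(num_inference_steps=10)

sample = torch.randn(1, 3, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    # Corrector step(s) refine the current sample...
    for _ in range(scheduler.config.correct_steps):
        score = fake_score_model(sample, t)
        sample = scheduler.step_correct(score, sample).prev_sample
    # ...then the predictor step advances it and also returns the mean.
    score = fake_score_model(sample, t)
    output = scheduler.step_pred(score, t, sample)
    sample, sample_mean = output.prev_sample, output.prev_sample_mean

final_image = sample_mean  # the mean is usually kept as the denoised result
```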
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md
https://huggingface.co./docs/diffusers/en/api/schedulers/score_sde_ve/#sdeveoutput
#sdeveoutput
.md
266_3
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> [[open-in-colab]]
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/
.md
267_0
Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the [Hub](https://huggingface.co./search/full-text?q=unconditional-image-generation&type=model), but if you can't find one you like, you can always train your own! This tutorial will teach you how to train a [`UNet2DModel`] from scratch on a subset of the [Smithsonian Butterflies](https://huggingface.co./datasets/huggan/smithsonian_butterflies_subset) dataset to generate your own 🦋 butterflies 🦋. <Tip> 💡 This training tutorial is based on the [Training with 🧨 Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook. For additional details and context about diffusion models, like how they work, check out the notebook! </Tip> Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training). ```py # uncomment to install the necessary libraries in Colab #!pip install diffusers[training] ``` We encourage you to share your model with the community, and in order to do that, you'll need to log in to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!). You can log in from a notebook and enter your token when prompted. Make sure your token has the write role. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` Or log in from the terminal: ```bash huggingface-cli login ``` Since the model checkpoints are quite large, install [Git-LFS](https://git-lfs.com/) to version these large files: ```bash !sudo apt -qq install git-lfs !git config --global credential.helper store ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#train-a-diffusion-model
#train-a-diffusion-model
.md
267_1
For convenience, create a `TrainingConfig` class containing the training hyperparameters (feel free to adjust them): ```py >>> from dataclasses import dataclass >>> @dataclass ... class TrainingConfig: ... image_size = 128 # the generated image resolution ... train_batch_size = 16 ... eval_batch_size = 16 # how many images to sample during evaluation ... num_epochs = 50 ... gradient_accumulation_steps = 1 ... learning_rate = 1e-4 ... lr_warmup_steps = 500 ... save_image_epochs = 10 ... save_model_epochs = 30 ... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision ... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub ... push_to_hub = True # whether to upload the saved model to the HF Hub ... hub_model_id = "<your-username>/<my-awesome-model>" # the name of the repository to create on the HF Hub ... hub_private_repo = None ... overwrite_output_dir = True # overwrite the old model when re-running the notebook ... seed = 0 >>> config = TrainingConfig() ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#training-configuration
#training-configuration
.md
267_2
You can easily load the [Smithsonian Butterflies](https://huggingface.co./datasets/huggan/smithsonian_butterflies_subset) dataset with the 🤗 Datasets library: ```py >>> from datasets import load_dataset >>> config.dataset_name = "huggan/smithsonian_butterflies_subset" >>> dataset = load_dataset(config.dataset_name, split="train") ``` <Tip> 💡 You can find additional datasets from the [HugGan Community Event](https://huggingface.co./huggan) or you can use your own dataset by creating a local [`ImageFolder`](https://huggingface.co./docs/datasets/image_dataset#imagefolder). Set `config.dataset_name` to the repository id of the dataset if it is from the HugGan Community Event, or `imagefolder` if you're using your own images. </Tip> 🤗 Datasets uses the [`~datasets.Image`] feature to automatically decode the image data and load it as a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html) which we can visualize: ```py >>> import matplotlib.pyplot as plt >>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) >>> for i, image in enumerate(dataset[:4]["image"]): ... axs[i].imshow(image) ... axs[i].set_axis_off() >>> fig.show() ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_ds.png"/> </div> The images are all different sizes though, so you'll need to preprocess them first: * `Resize` changes the image size to the one defined in `config.image_size`. * `RandomHorizontalFlip` augments the dataset by randomly mirroring the images. * `Normalize` is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. ```py >>> from torchvision import transforms >>> preprocess = transforms.Compose( ... [ ... transforms.Resize((config.image_size, config.image_size)), ... transforms.RandomHorizontalFlip(), ... transforms.ToTensor(), ... transforms.Normalize([0.5], [0.5]), ... ] ... ) ``` Use 🤗 Datasets' [`~datasets.Dataset.set_transform`] method to apply the `preprocess` function on the fly during training: ```py >>> def transform(examples): ... images = [preprocess(image.convert("RGB")) for image in examples["image"]] ... return {"images": images} >>> dataset.set_transform(transform) ``` Feel free to visualize the images again to confirm that they've been resized. Now you're ready to wrap the dataset in a [DataLoader](https://pytorch.org/docs/stable/data#torch.utils.data.DataLoader) for training! ```py >>> import torch >>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#load-the-dataset
#load-the-dataset
.md
267_3
Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a [`UNet2DModel`]: ```py >>> from diffusers import UNet2DModel >>> model = UNet2DModel( ... sample_size=config.image_size, # the target image resolution ... in_channels=3, # the number of input channels, 3 for RGB images ... out_channels=3, # the number of output channels ... layers_per_block=2, # how many ResNet layers to use per UNet block ... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block ... down_block_types=( ... "DownBlock2D", # a regular ResNet downsampling block ... "DownBlock2D", ... "DownBlock2D", ... "DownBlock2D", ... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention ... "DownBlock2D", ... ), ... up_block_types=( ... "UpBlock2D", # a regular ResNet upsampling block ... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention ... "UpBlock2D", ... "UpBlock2D", ... "UpBlock2D", ... "UpBlock2D", ... ), ... ) ``` It is often a good idea to quickly check the sample image shape matches the model output shape: ```py >>> sample_image = dataset[0]["images"].unsqueeze(0) >>> print("Input shape:", sample_image.shape) Input shape: torch.Size([1, 3, 128, 128]) >>> print("Output shape:", model(sample_image, timestep=0).sample.shape) Output shape: torch.Size([1, 3, 128, 128]) ``` Great! Next, you'll need a scheduler to add some noise to the image.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#create-a-unet2dmodel
#create-a-unet2dmodel
.md
267_4
The scheduler behaves differently depending on whether you're using the model for training or inference. During inference, the scheduler generates an image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a *noise schedule* and an *update rule*. Let's take a look at the [`DDPMScheduler`] and use the `add_noise` method to add some random noise to the `sample_image` from before: ```py >>> import torch >>> from PIL import Image >>> from diffusers import DDPMScheduler >>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) >>> noise = torch.randn(sample_image.shape) >>> timesteps = torch.LongTensor([50]) >>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) >>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/noisy_butterfly.png"/> </div> The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: ```py >>> import torch.nn.functional as F >>> noise_pred = model(noisy_image, timesteps).sample >>> loss = F.mse_loss(noise_pred, noise) ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#create-a-scheduler
#create-a-scheduler
.md
267_5
By now, you have most of the pieces to start training the model and all that's left is putting everything together. First, you'll need an optimizer and a learning rate scheduler: ```py >>> from diffusers.optimization import get_cosine_schedule_with_warmup >>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) >>> lr_scheduler = get_cosine_schedule_with_warmup( ... optimizer=optimizer, ... num_warmup_steps=config.lr_warmup_steps, ... num_training_steps=(len(train_dataloader) * config.num_epochs), ... ) ``` Then, you'll need a way to evaluate the model. For evaluation, you can use the [`DDPMPipeline`] to generate a batch of sample images and save it as a grid: ```py >>> from diffusers import DDPMPipeline >>> from diffusers.utils import make_image_grid >>> import os >>> def evaluate(config, epoch, pipeline): ... # Sample some images from random noise (this is the backward diffusion process). ... # The default pipeline output type is `List[PIL.Image]` ... images = pipeline( ... batch_size=config.eval_batch_size, ... generator=torch.Generator(device='cpu').manual_seed(config.seed), # Use a separate torch generator to avoid rewinding the random state of the main training loop ... ).images ... # Make a grid out of the images ... image_grid = make_image_grid(images, rows=4, cols=4) ... # Save the images ... test_dir = os.path.join(config.output_dir, "samples") ... os.makedirs(test_dir, exist_ok=True) ... image_grid.save(f"{test_dir}/{epoch:04d}.png") ``` Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. <Tip> 💡 The training loop below may look intimidating and long, but it'll be worth it later when you launch your training in just one line of code! If you can't wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you're waiting for your model to finish training. 🤗 </Tip> ```py >>> from accelerate import Accelerator >>> from huggingface_hub import create_repo, upload_folder >>> from tqdm.auto import tqdm >>> from pathlib import Path >>> import os >>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): ... # Initialize accelerator and tensorboard logging ... accelerator = Accelerator( ... mixed_precision=config.mixed_precision, ... gradient_accumulation_steps=config.gradient_accumulation_steps, ... log_with="tensorboard", ... project_dir=os.path.join(config.output_dir, "logs"), ... ) ... if accelerator.is_main_process: ... if config.output_dir is not None: ... os.makedirs(config.output_dir, exist_ok=True) ... if config.push_to_hub: ... repo_id = create_repo( ... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True ... ).repo_id ... accelerator.init_trackers("train_example") ... # Prepare everything ... # There is no specific order to remember, you just need to unpack the ... # objects in the same order you gave them to the prepare method. ... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( ... model, optimizer, train_dataloader, lr_scheduler ... ) ... global_step = 0 ... # Now you train the model ... for epoch in range(config.num_epochs): ... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) ... 
progress_bar.set_description(f"Epoch {epoch}") ... for step, batch in enumerate(train_dataloader): ... clean_images = batch["images"] ... # Sample noise to add to the images ... noise = torch.randn(clean_images.shape, device=clean_images.device) ... bs = clean_images.shape[0] ... # Sample a random timestep for each image ... timesteps = torch.randint( ... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, ... dtype=torch.int64 ... ) ... # Add noise to the clean images according to the noise magnitude at each timestep ... # (this is the forward diffusion process) ... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) ... with accelerator.accumulate(model): ... # Predict the noise residual ... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] ... loss = F.mse_loss(noise_pred, noise) ... accelerator.backward(loss) ... if accelerator.sync_gradients: ... accelerator.clip_grad_norm_(model.parameters(), 1.0) ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... progress_bar.update(1) ... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} ... progress_bar.set_postfix(**logs) ... accelerator.log(logs, step=global_step) ... global_step += 1 ... # After each epoch you optionally sample some demo images with evaluate() and save the model ... if accelerator.is_main_process: ... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) ... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: ... evaluate(config, epoch, pipeline) ... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: ... if config.push_to_hub: ... upload_folder( ... repo_id=repo_id, ... folder_path=config.output_dir, ... commit_message=f"Epoch {epoch}", ... ignore_patterns=["step_*", "epoch_*"], ... ) ... else: ... pipeline.save_pretrained(config.output_dir) ``` Phew, that was quite a bit of code! But you're finally ready to launch the training with 🤗 Accelerate's [`~accelerate.notebook_launcher`] function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: ```py >>> from accelerate import notebook_launcher >>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) >>> notebook_launcher(train_loop, args, num_processes=1) ``` Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! ```py >>> import glob >>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) >>> Image.open(sample_images[-1]) ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/butterflies_final.png"/> </div>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#train-the-model
#train-the-model
.md
267_6
Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the [🧨 Diffusers Training Examples](../training/overview) page. Here are some examples of what you can learn: * [Textual Inversion](../training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image. * [DreamBooth](../training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject. * [Guide](../training/text2image) to finetuning a Stable Diffusion model on your own dataset. * [Guide](../training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/basic_training.md
https://huggingface.co./docs/diffusers/en/tutorials/basic_training/#next-steps
#next-steps
.md
267_7
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/tutorial_overview.md
https://huggingface.co./docs/diffusers/en/tutorials/tutorial_overview/
.md
268_0
Welcome to 🧨 Diffusers! If you're new to diffusion models and generative AI, and want to learn more, then you've come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You'll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you'll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you'll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on [Discord](https://discord.com/invite/JfAtkvEtRb) or the [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) to connect and collaborate with other users and developers! Let's start diffusing! 🧨
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/tutorial_overview.md
https://huggingface.co./docs/diffusers/en/tutorials/tutorial_overview/#overview
#overview
.md
268_1
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/
.md
269_0
Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. There are several techniques that can address this limitation, such as progressive timestep distillation ([LCM LoRA](../using-diffusers/inference_with_lcm_lora)), model compression ([SSD-1B](https://huggingface.co./segmind/SSD-1B)), and reusing adjacent features of the denoiser ([DeepCache](../optimization/deepcache)). However, you don't necessarily need to use these techniques to speed up inference. With PyTorch 2 alone, you can reduce the inference latency of text-to-image diffusion pipelines by up to 3x. This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You'll use the [Stable Diffusion XL (SDXL)](../using-diffusers/sdxl) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you're using the latest version of Diffusers: ```bash pip install -U diffusers ``` Then upgrade the other required libraries too: ```bash pip install -U transformers accelerate peft ``` Install [PyTorch nightly](https://pytorch.org/) to benefit from the latest and fastest kernels: ```bash pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 ``` > [!TIP] > The results reported below are from an 80GB 400W A100 with its clock rate set to the maximum. > If you're interested in the full benchmarking code, take a look at [huggingface/diffusion-fast](https://github.com/huggingface/diffusion-fast).
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#accelerate-inference-of-text-to-image-diffusion-models
#accelerate-inference-of-text-to-image-diffusion-models
.md
269_1
Let's start with a baseline. Disable reduced precision and the [`scaled_dot_product_attention` (SDPA)](../optimization/torch2.0#scaled-dot-product-attention) function which is automatically used by Diffusers: ```python from diffusers import StableDiffusionXLPipeline # Load the pipeline in full-precision and place its model components on CUDA. pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0" ).to("cuda") # Run the attention ops without SDPA. pipe.unet.set_default_attn_processor() pipe.vae.set_default_attn_processor() prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe(prompt, num_inference_steps=30).images[0] ``` This default setup takes 7.36 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_0.png" width=500> </div>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#baseline
#baseline
.md
269_2
Enable the first optimization: reduced precision, or more specifically, bfloat16. There are several benefits of using reduced precision: * Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. * The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. * bfloat16 is generally more resilient than float16 when combined with quantization, although more recent versions of the quantization library we used ([torchao](https://github.com/pytorch-labs/ao)) no longer have numerical issues with float16. ```python from diffusers import StableDiffusionXLPipeline import torch pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 ).to("cuda") # Run the attention ops without SDPA. pipe.unet.set_default_attn_processor() pipe.vae.set_default_attn_processor() prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe(prompt, num_inference_steps=30).images[0] ``` bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_1.png" width=500> </div> <Tip> In our later experiments with float16, recent versions of torchao did not incur any numerical problems. </Tip> Take a look at the [Speed up inference](../optimization/fp16) guide to learn more about running inference with reduced precision.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#bfloat16
#bfloat16
.md
269_3
Attention blocks are compute-intensive to run, but with PyTorch's [`scaled_dot_product_attention`](../optimization/torch2.0#scaled-dot-product-attention) function, they run a lot more efficiently. This function is used by default in Diffusers, so you don't need to make any changes to the code. ```python from diffusers import StableDiffusionXLPipeline import torch pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 ).to("cuda") prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe(prompt, num_inference_steps=30).images[0] ``` Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_2.png" width=500> </div>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#sdpa
#sdpa
.md
269_4
PyTorch 2 includes `torch.compile`, which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the [full list](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py) for more options): ```python from diffusers import StableDiffusionXLPipeline import torch torch._inductor.config.conv_1x1_as_mm = True torch._inductor.config.coordinate_descent_tuning = True torch._inductor.config.epilogue_fusion = False torch._inductor.config.coordinate_descent_check_all_directions = True ``` It is also important to change the UNet and VAE's memory layout to "channels_last" when compiling them to ensure maximum speed. ```python pipe.unet.to(memory_format=torch.channels_last) pipe.vae.to(memory_format=torch.channels_last) ``` Now compile and perform inference: ```python # Compile the UNet and VAE. pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" # First call to `pipe` is slow, subsequent ones are faster. image = pipe(prompt, num_inference_steps=30).images[0] ``` `torch.compile` offers different backends and modes. For maximum inference speed, use "max-autotune" for the inductor backend. "max-autotune" uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduce the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_3.png" width=500> </div> > [!TIP] > From PyTorch 2.3.1, you can control the caching behavior of `torch.compile()`. This is particularly beneficial for compilation modes like `"max-autotune"`, which performs a grid search over several compilation flags to find the optimal configuration. Learn more in the [Compile Time Caching in torch.compile](https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html) tutorial.
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#torchcompile
#torchcompile
.md
269_5
Specifying `fullgraph=True` ensures there are no graph breaks in the underlying model to take full advantage of `torch.compile` without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. ```diff - latents = unet( - latents, timestep=timestep, encoder_hidden_states=prompt_embeds -).sample + latents = unet( + latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False +)[0] ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#prevent-graph-breaks
#prevent-graph-breaks
.md
269_6
During the iterative reverse diffusion process, the `step()` function is [called](https://github.com/huggingface/diffusers/blob/1d686bac8146037e97f3fd8c56e4063230f71751/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py#L1228) on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside `step()`, the `sigmas` variable is [indexed](https://github.com/huggingface/diffusers/blob/1d686bac8146037e97f3fd8c56e4063230f71751/src/diffusers/schedulers/scheduling_euler_discrete.py#L476), which, when the array is placed on the GPU, causes a communication sync between the CPU and GPU. This introduces latency, and it becomes more noticeable once the denoiser has already been compiled. But if the `sigmas` array always [stays on the CPU](https://github.com/huggingface/diffusers/blob/35a969d297cba69110d175ee79c59312b9f49e1e/src/diffusers/schedulers/scheduling_euler_discrete.py#L240), the CPU-GPU sync doesn't occur and you avoid this latency penalty. In general, any CPU-GPU communication sync should be avoided or kept to a bare minimum because it can impact inference latency.
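To see why the placement of this kind of lookup table matters, here is a small, generic PyTorch sketch (illustrative only, not the scheduler's actual code). Reading a value out of a GPU tensor forces the CPU to wait for the GPU, while the same lookup from a CPU copy does not:

```python
import torch

# A per-step lookup table, analogous to the scheduler's `sigmas` array (requires a CUDA device).
sigmas_gpu = torch.linspace(14.6, 0.03, 1000, device="cuda")
sigmas_cpu = sigmas_gpu.cpu()

step_index = 42

# Reading a single element of a GPU tensor back as a Python float forces a
# CPU<->GPU synchronization on every scheduler step.
sigma = sigmas_gpu[step_index].item()

# Keeping the table on the CPU avoids the sync entirely; the scalar is already
# host-side and can be fed into the next GPU kernel launch without blocking.
sigma = sigmas_cpu[step_index].item()
```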
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#remove-gpu-sync-after-compilation
#remove-gpu-sync-after-compilation
.md
269_7
The UNet and VAE in SDXL use Transformer-like blocks, which consist of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input, but you can horizontally combine the projection matrices into a single matrix and perform the projection in one step, as the small sketch below illustrates. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. You can combine the projection matrices with just a single line of code: ```python pipe.fuse_qkv_projections() ``` This provides a minor improvement from 2.54 seconds to 2.52 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_4.png" width=500> </div> <Tip warning={true}> Support for [`~StableDiffusionXLPipeline.fuse_qkv_projections`] is limited and experimental. It's not available for many non-Stable Diffusion pipelines such as [Kandinsky](../using-diffusers/kandinsky). You can refer to this [PR](https://github.com/huggingface/diffusers/pull/6179) to get an idea about how to enable this for the other pipelines. </Tip>
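To make the fusion concrete, here is a small standalone sketch (independent of how `fuse_qkv_projections` is implemented internally) showing that three separate projections can be replaced by one wider matmul whose weight is the row-wise concatenation of the Q, K, and V matrices:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 8
x = torch.randn(2, dim)

# Three separate projections, as in a standard attention block.
to_q = nn.Linear(dim, dim, bias=False)
to_k = nn.Linear(dim, dim, bias=False)
to_v = nn.Linear(dim, dim, bias=False)
q, k, v = to_q(x), to_k(x), to_v(x)

# One fused projection: stack the weights row-wise so a single, larger matmul
# produces q, k, and v in one shot, then split the result.
fused = nn.Linear(dim, 3 * dim, bias=False)
with torch.no_grad():
    fused.weight.copy_(torch.cat([to_q.weight, to_k.weight, to_v.weight], dim=0))
q2, k2, v2 = fused(x).chunk(3, dim=-1)

assert torch.allclose(q, q2, atol=1e-6)
assert torch.allclose(k, k2, atol=1e-6)
assert torch.allclose(v, v2, atol=1e-6)
```

The single, larger matmul is exactly what makes the fused projection a better target for quantization, as noted above.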
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#combine-the-attention-blocks-projection-matrices
#combine-the-attention-blocks-projection-matrices
.md
269_8
You can also use the ultra-lightweight PyTorch quantization library, [torchao](https://github.com/pytorch-labs/ao) (commit SHA `54bcd5a10d0abbe7b0c045052029257099f83fd9`), to apply [dynamic int8 quantization](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html) to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully offset by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. First, configure all the compiler flags: ```python from diffusers import StableDiffusionXLPipeline import torch # Notice the two new flags at the end. torch._inductor.config.conv_1x1_as_mm = True torch._inductor.config.coordinate_descent_tuning = True torch._inductor.config.epilogue_fusion = False torch._inductor.config.coordinate_descent_check_all_directions = True torch._inductor.config.force_fuse_int_mm_with_mul = True torch._inductor.config.use_mixed_mm = True ``` Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the [`dynamic_quant_filter_fn`](https://github.com/huggingface/diffusion-fast/blob/0f169640b1db106fe6a479f78c1ed3bfaeba3386/utils/pipeline_utils.py#L16) shown below. ```python def dynamic_quant_filter_fn(mod, *args): return ( isinstance(mod, torch.nn.Linear) and mod.in_features > 16 and (mod.in_features, mod.out_features) not in [ (1280, 640), (1920, 1280), (1920, 640), (2048, 1280), (2048, 2560), (2560, 1280), (256, 128), (2816, 1280), (320, 640), (512, 1536), (512, 256), (512, 512), (640, 1280), (640, 1920), (640, 320), (640, 5120), (640, 640), (960, 320), (960, 640), ] ) def conv_filter_fn(mod, *args): return ( isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] ) ``` Next, apply all the optimizations discussed so far: ```python # SDPA + bfloat16. pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 ).to("cuda") # Combine attention projection matrices. pipe.fuse_qkv_projections() # Change the memory layout. pipe.unet.to(memory_format=torch.channels_last) pipe.vae.to(memory_format=torch.channels_last) ``` Since dynamic quantization is limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. ```python from torchao import swap_conv2d_1x1_to_linear swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) ``` Apply dynamic quantization: ```python from torchao import apply_dynamic_quant apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) ``` Finally, compile and perform inference: ```python pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" image = pipe(prompt, num_inference_steps=30).images[0] ``` Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/sayakpaul/sample-datasets/resolve/main/progressive-acceleration-sdxl/SDXL%2C_Batch_Size%3A_1%2C_Steps%3A_30_5.png" width=500> </div>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/fast_diffusion.md
https://huggingface.co./docs/diffusers/en/tutorials/fast_diffusion/#dynamic-quantization
#dynamic-quantization
.md
269_9
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/
.md
270_0
Diffusers provides many pipelines for basic tasks like generating images, videos, audio, and inpainting. On top of these, there are specialized pipelines for adapters and features like upscaling, super-resolution, and more. Different pipeline classes can even use the same checkpoint because they share the same pretrained model! With so many different pipelines, it can be overwhelming to know which pipeline class to use. The [AutoPipeline](../api/pipelines/auto_pipeline) class is designed to simplify the variety of pipelines in Diffusers. It is a generic *task-first* pipeline that lets you focus on a task ([`AutoPipelineForText2Image`], [`AutoPipelineForImage2Image`], and [`AutoPipelineForInpainting`]) without needing to know the specific pipeline class. The [AutoPipeline](../api/pipelines/auto_pipeline) automatically detects the correct pipeline class to use. For example, let's use the [dreamlike-art/dreamlike-photoreal-2.0](https://hf.co/dreamlike-art/dreamlike-photoreal-2.0) checkpoint. Under the hood, [AutoPipeline](../api/pipelines/auto_pipeline): 1. Detects a `"stable-diffusion"` class from the [model_index.json](https://hf.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/model_index.json) file. 2. Depending on the task you're interested in, it loads the [`StableDiffusionPipeline`], [`StableDiffusionImg2ImgPipeline`], or [`StableDiffusionInpaintPipeline`]. Any parameter (`strength`, `num_inference_steps`, etc.) you would pass to these specific pipelines can also be passed to the [AutoPipeline](../api/pipelines/auto_pipeline). <hfoptions id="autopipeline"> <hfoption id="text-to-image"> ```py from diffusers import AutoPipelineForText2Image import torch pipe_txt2img = AutoPipelineForText2Image.from_pretrained( "dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16, use_safetensors=True ).to("cuda") prompt = "cinematic photo of Godzilla eating sushi with a cat in a izakaya, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device="cpu").manual_seed(37) image = pipe_txt2img(prompt, generator=generator).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-text2img.png"/> </div> </hfoption> <hfoption id="image-to-image"> ```py from diffusers import AutoPipelineForImage2Image from diffusers.utils import load_image import torch pipe_img2img = AutoPipelineForImage2Image.from_pretrained( "dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16, use_safetensors=True ).to("cuda") init_image = load_image("https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-text2img.png") prompt = "cinematic photo of Godzilla eating burgers with a cat in a fast food restaurant, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device="cpu").manual_seed(53) image = pipe_img2img(prompt, image=init_image, generator=generator).images[0] image ``` Notice how the [dreamlike-art/dreamlike-photoreal-2.0](https://hf.co/dreamlike-art/dreamlike-photoreal-2.0) checkpoint is used for both text-to-image and image-to-image tasks? To save memory and avoid loading the checkpoint twice, use the [`~DiffusionPipeline.from_pipe`] method. 
```py pipe_img2img = AutoPipelineForImage2Image.from_pipe(pipe_txt2img).to("cuda") image = pipe_img2img(prompt, image=init_image, generator=generator).images[0] image ``` You can learn more about the [`~DiffusionPipeline.from_pipe`] method in the [Reuse a pipeline](../using-diffusers/loading#reuse-a-pipeline) guide. <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-img2img.png"/> </div> </hfoption> <hfoption id="inpainting"> ```py from diffusers import AutoPipelineForInpainting from diffusers.utils import load_image import torch pipeline = AutoPipelineForInpainting.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True ).to("cuda") init_image = load_image("https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-img2img.png") mask_image = load_image("https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-mask.png") prompt = "cinematic photo of a owl, 35mm photograph, film, professional, 4k, highly detailed" generator = torch.Generator(device="cpu").manual_seed(38) image = pipeline(prompt, image=init_image, mask_image=mask_image, generator=generator, strength=0.4).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/autopipeline-inpaint.png"/> </div> </hfoption> </hfoptions>
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#autopipeline
#autopipeline
.md
270_1
The [AutoPipeline](../api/pipelines/auto_pipeline) supports [Stable Diffusion](../api/pipelines/stable_diffusion/overview), [Stable Diffusion XL](../api/pipelines/stable_diffusion/stable_diffusion_xl), [ControlNet](../api/pipelines/controlnet), [Kandinsky 2.1](../api/pipelines/kandinsky.md), [Kandinsky 2.2](../api/pipelines/kandinsky_v22), and [DeepFloyd IF](../api/pipelines/deepfloyd_if) checkpoints. If you try to load an unsupported checkpoint, you'll get an error. ```py from diffusers import AutoPipelineForImage2Image import torch pipeline = AutoPipelineForImage2Image.from_pretrained( "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True ) "ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/autopipeline.md
https://huggingface.co./docs/diffusers/en/tutorials/autopipeline/#unsupported-checkpoints
#unsupported-checkpoints
.md
270_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/
.md
271_0
A modern diffusion model, like [Stable Diffusion XL (SDXL)](../using-diffusers/sdxl), is not just a single model, but a collection of multiple models. SDXL has four different model-level components: * A variational autoencoder (VAE) * Two text encoders * A UNet for denoising Usually, the text encoders and the denoiser are much larger compared to the VAE. As models get bigger and better, it’s possible your model is so big that even a single copy won’t fit in memory. But that doesn’t mean it can’t be loaded. If you have more than one GPU, there is more memory available to store your model. In this case, it’s better to split your model checkpoint into several smaller *checkpoint shards*. When a text encoder checkpoint has multiple shards, like [T5-xxl for SD3](https://huggingface.co./stabilityai/stable-diffusion-3-medium-diffusers/tree/main/text_encoder_3), it is automatically handled by the [Transformers](https://huggingface.co./docs/transformers/index) library as it is a required dependency of Diffusers when using the [`StableDiffusion3Pipeline`]. More specifically, Transformers will automatically handle the loading of multiple shards within the requested model class and get it ready so that inference can be performed. The denoiser checkpoint can also have multiple shards and supports inference thanks to the [Accelerate](https://huggingface.co./docs/accelerate/index) library. > [!TIP] > Refer to the [Handling big models for inference](https://huggingface.co./docs/accelerate/main/en/concept_guides/big_model_inference) guide for general guidance when working with big models that are hard to fit into memory. For example, let's save a sharded checkpoint for the [SDXL UNet](https://huggingface.co./stabilityai/stable-diffusion-xl-base-1.0/tree/main/unet): ```python from diffusers import UNet2DConditionModel unet = UNet2DConditionModel.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet" ) unet.save_pretrained("sdxl-unet-sharded", max_shard_size="5GB") ``` The size of the fp32 variant of the SDXL UNet checkpoint is ~10.4GB. Set the `max_shard_size` parameter to 5GB to create 3 shards. After saving, you can load them in [`StableDiffusionXLPipeline`]: ```python from diffusers import UNet2DConditionModel, StableDiffusionXLPipeline import torch unet = UNet2DConditionModel.from_pretrained( "sayakpaul/sdxl-unet-sharded", torch_dtype=torch.float16 ) pipeline = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16 ).to("cuda") image = pipeline("a cute dog running on the grass", num_inference_steps=30).images[0] image.save("dog.png") ``` If placing all the model-level components on the GPU at once is not feasible, use [`~DiffusionPipeline.enable_model_cpu_offload`] to help you: ```diff - pipeline.to("cuda") + pipeline.enable_model_cpu_offload() ``` In general, we recommend sharding when a checkpoint is more than 5GB (in fp32).
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#working-with-big-models
#working-with-big-models
.md
271_1
On distributed setups, you can run inference across multiple GPUs with Accelerate. > [!WARNING] > This feature is experimental and its APIs might change in the future. With Accelerate, you can use the `device_map` to determine how to distribute the models of a pipeline across multiple devices. This is useful in situations where you have more than one GPU. For example, if you have two 8GB GPUs, then using [`~DiffusionPipeline.enable_model_cpu_offload`] may not work so well because: * it only works on a single GPU * a single model might not fit on a single GPU ([`~DiffusionPipeline.enable_sequential_cpu_offload`] might work but it will be extremely slow and it is also limited to a single GPU) To make use of both GPUs, you can use the "balanced" device placement strategy which splits the models across all available GPUs. > [!WARNING] > Only the "balanced" strategy is supported at the moment, and we plan to support additional mapping strategies in the future. ```diff from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained( - "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, device_map="balanced" ) image = pipeline("a dog").images[0] image ``` You can also pass a dictionary to enforce the maximum GPU memory that can be used on each device: ```diff from diffusers import DiffusionPipeline import torch max_memory = {0:"1GB", 1:"1GB"} pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, device_map="balanced", + max_memory=max_memory ) image = pipeline("a dog").images[0] image ``` If a device is not present in `max_memory`, then it will be completely ignored and will not participate in the device placement. By default, Diffusers uses the maximum memory of all devices. If the models don't fit on the GPUs, they are offloaded to the CPU. If the CPU doesn't have enough memory, then you might see an error. In that case, you could defer to using [`~DiffusionPipeline.enable_sequential_cpu_offload`] and [`~DiffusionPipeline.enable_model_cpu_offload`]. Call [`~DiffusionPipeline.reset_device_map`] to reset the `device_map` of a pipeline. This is also necessary if you want to use methods like `to()`, [`~DiffusionPipeline.enable_sequential_cpu_offload`], and [`~DiffusionPipeline.enable_model_cpu_offload`] on a pipeline that was device-mapped. ```py pipeline.reset_device_map() ``` Once a pipeline has been device-mapped, you can also access its device map via `hf_device_map`: ```py print(pipeline.hf_device_map) ``` An example device map would look like so: ```bash {'unet': 1, 'vae': 1, 'safety_checker': 0, 'text_encoder': 0} ```
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/inference_with_big_models.md
https://huggingface.co./docs/diffusers/en/tutorials/inference_with_big_models/#device-placement
#device-placement
.md
271_2
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> [[open-in-colab]]
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/
.md
272_0
There are many adapter types (with [LoRAs](https://huggingface.co./docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. In this tutorial, you'll learn how to easily load and manage adapters for inference with the 🤗 [PEFT](https://huggingface.co./docs/peft/index) integration in 🤗 Diffusers. You'll use LoRA as the main adapter technique, so you'll see the terms LoRA and adapter used interchangeably. Let's first install all the required libraries. ```bash !pip install -q transformers accelerate peft diffusers ``` Now, load a pipeline with a [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) checkpoint: ```python from diffusers import DiffusionPipeline import torch pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") ``` Next, load a [CiroN2022/toy-face](https://huggingface.co./CiroN2022/toy-face) adapter with the [`~diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights`] method. With the 🤗 PEFT integration, you can assign a specific `adapter_name` to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let's call this adapter `"toy"`. ```python pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") ``` Make sure to include the token `toy_face` in the prompt and then you can perform inference: ```python prompt = "toy_face of a hacker with a hoodie" lora_scale = 0.9 image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![toy-face](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_8_1.png) With the `adapter_name` parameter, it is really easy to use another adapter for inference! Load the [nerijs/pixel-art-xl](https://huggingface.co./nerijs/pixel-art-xl) adapter that has been fine-tuned to generate pixel art images and call it `"pixel"`. The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter, but you can activate the `"pixel"` adapter with the [`~loaders.peft.PeftAdapterMixin.set_adapters`] method: ```python pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") pipe.set_adapters("pixel") ``` Make sure you include the token `pixel art` in your prompt to generate a pixel art image: ```python prompt = "a hacker with a hoodie, pixel art" image = pipe( prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) ).images[0] image ``` ![pixel-art](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_12_1.png) <Tip> By default, if the most up-to-date versions of PEFT and Transformers are detected, `low_cpu_mem_usage` is set to `True` to speed up the loading time of LoRA checkpoints. </Tip>
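If you prefer not to rely on version detection, recent Diffusers releases also let you pass `low_cpu_mem_usage` directly when loading a LoRA. The snippet below is a sketch under that assumption; check your installed version's `load_lora_weights` signature before relying on it.

```python
# Sketch: opt in to faster LoRA loading explicitly.
# Assumes a diffusers/PEFT version where `load_lora_weights` accepts `low_cpu_mem_usage`.
pipe.load_lora_weights(
    "CiroN2022/toy-face",
    weight_name="toy_face_sdxl.safetensors",
    adapter_name="toy",
    low_cpu_mem_usage=True,  # skip full initialization of the LoRA layers before loading the weights
)
```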
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#load-loras-for-inference
#load-loras-for-inference
.md
272_1
You can also merge different adapter checkpoints for inference to blend their styles together.

Once again, use the [`~loaders.peft.PeftAdapterMixin.set_adapters`] method to activate the `pixel` and `toy` adapters and specify the weights for how they should be merged.

```python
pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])
```

<Tip>

LoRA checkpoints in the diffusion community are almost always obtained with [DreamBooth](https://huggingface.co./docs/diffusers/main/en/training/dreambooth). DreamBooth training often relies on "trigger" words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it's important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts.

</Tip>

Remember to use the trigger words for [CiroN2022/toy-face](https://hf.co/CiroN2022/toy-face) and [nerijs/pixel-art-xl](https://hf.co/nerijs/pixel-art-xl) (these are found in their repositories) in the prompt to generate an image.

```python
prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0)
).images[0]
image
```

![toy-face-pixel-art](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_16_1.png)

Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters.

> [!TIP]
> Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the [Merge LoRAs](../using-diffusers/merge_loras) guide!

To return to only using one adapter, use the [`~loaders.peft.PeftAdapterMixin.set_adapters`] method to activate the `"toy"` adapter:

```python
pipe.set_adapters("toy")

prompt = "toy_face of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image
```

Or to disable all adapters entirely, use the [`~loaders.peft.PeftAdapterMixin.disable_lora`] method to return the base model.

```python
pipe.disable_lora()

prompt = "toy_face of a hacker with a hoodie"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

![no-lora](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png)
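If you're unsure which blend ratio works best, a quick sweep over `adapter_weights` can help. The loop below is a small convenience sketch built from the same `set_adapters` call shown above; the weight values and output filenames are arbitrary examples, not part of the API.

```python
# Sketch: compare a few pixel/toy blend ratios with a fixed seed.
prompt = "toy_face of a hacker with a hoodie, pixel art"

for pixel_weight in (0.25, 0.5, 0.75):
    pipe.set_adapters(["pixel", "toy"], adapter_weights=[pixel_weight, 1.0])
    image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
    image.save(f"blend_pixel_{pixel_weight}.png")  # hypothetical filenames for side-by-side comparison
```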
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#merge-adapters
#merge-adapters
.md
272_2
For even more customization, you can control how strongly the adapter affects each part of the pipeline. For this, pass a dictionary with the control strengths (called "scales") to [`~loaders.peft.PeftAdapterMixin.set_adapters`].

For example, here's how you can turn on the adapter for the `down` parts, but turn it off for the `mid` and `up` parts:

```python
pipe.enable_lora()  # enable lora again, after we disabled it above
prompt = "toy_face of a hacker with a hoodie, pixel art"
adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} }
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

![block-lora-text-and-down](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_down.png)

Let's see how turning off the `down` part and turning on the `mid` and `up` part respectively changes the image.

```python
adapter_weight_scales = { "unet": { "down": 0, "mid": 1, "up": 0} }
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

![block-lora-text-and-mid](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mid.png)

```python
adapter_weight_scales = { "unet": { "down": 0, "mid": 0, "up": 1} }
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

![block-lora-text-and-up](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_up.png)

Looks cool!

This is a really powerful feature. You can use it to control the adapter strengths down to per-transformer level. And you can even use it for multiple adapters.

```python
adapter_weight_scales_toy = 0.5
adapter_weight_scales_pixel = {
    "unet": {
        "down": 0.9,  # all transformers in the down-part will use scale 0.9
        # "mid"  # because "mid" is not given in this example, all transformers in the mid-part will use the default scale 1.0
        "up": {
            "block_0": 0.6,  # all 3 transformers in the 0th block in the up-part will use scale 0.6
            "block_1": [0.4, 0.8, 1.0],  # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively
        }
    }
}
pipe.set_adapters(["toy", "pixel"], [adapter_weight_scales_toy, adapter_weight_scales_pixel])
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image
```

![block-lora-mixed](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mixed.png)
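To reproduce the three single-section comparisons above without copy-pasting, you can loop over the UNet sections. This is just a convenience sketch using the same `set_adapters` scales dictionary shown above; the output filenames are made up for illustration.

```python
# Sketch: isolate the "pixel" adapter to one UNet section at a time.
prompt = "toy_face of a hacker with a hoodie, pixel art"

for section in ("down", "mid", "up"):
    scales = {"unet": {"down": 0, "mid": 0, "up": 0}}
    scales["unet"][section] = 1  # enable the adapter only in this section
    pipe.set_adapters("pixel", scales)
    image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
    image.save(f"pixel_only_{section}.png")  # hypothetical filename
```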
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#customize-adapters-strength
#customize-adapters-strength
.md
272_3
You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, use the [`~diffusers.loaders.StableDiffusionLoraLoaderMixin.get_active_adapters`] method to check the list of active adapters:

```py
active_adapters = pipe.get_active_adapters()
active_adapters
["toy", "pixel"]
```

You can also get the active adapters of each pipeline component with [`~diffusers.loaders.StableDiffusionLoraLoaderMixin.get_list_adapters`]:

```py
list_adapters_component_wise = pipe.get_list_adapters()
list_adapters_component_wise
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}
```

The [`~loaders.peft.PeftAdapterMixin.delete_adapters`] function completely removes an adapter and its LoRA layers from a model.

```py
pipe.delete_adapters("toy")
pipe.get_active_adapters()
["pixel"]
```
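When you juggle several adapters, it can help to wrap these inspection calls in a small helper. Below is a minimal sketch that uses only the methods shown above; the helper name is ours, not part of the Diffusers API.

```py
# Sketch: print which adapters are attached to each component,
# e.g. before and after deleting one of them.
def report_adapters(pipe, label):
    print(f"--- {label} ---")
    print("active:", pipe.get_active_adapters())
    for component, adapters in pipe.get_list_adapters().items():
        print(f"  {component}: {adapters}")

report_adapters(pipe, "before delete")
pipe.delete_adapters("toy")
report_adapters(pipe, "after delete")
```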
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/tutorials/using_peft_for_inference.md
https://huggingface.co./docs/diffusers/en/tutorials/using_peft_for_inference/#manage-adapters
#manage-adapters
.md
272_4