| column | dtype | stats |
| --- | --- | --- |
| source | string | 273 distinct values |
| url | string | length 47–172 |
| file_type | string | 1 distinct value |
| chunk | string | length 1–512 |
| chunk_id | string | length 5–9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#inferencepython-inference
.md
``` Pass the path of the downloaded checkpoint to the script with the `-i` flag. `--compute-unit` indicates the hardware you want to allow for inference. It must be one of the following options: `ALL`, `CPU_AND_GPU`, `CPU_ONLY`, `CPU_AND_NE`. You may also provide an optional output path and a seed for reproducibility.
7_6_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#inferencepython-inference
.md
The inference script assumes you're using the original version of the Stable Diffusion model, `CompVis/stable-diffusion-v1-4`. If you use another model, you *have* to specify its Hub id in the inference command line, using the `--model-version` option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co./stable-diffusion-v1-5/stable-diffusion-v1-5): ```shell
7_6_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#inferencepython-inference
.md
```shell python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version stable-diffusion-v1-5/stable-diffusion-v1-5 ```
7_6_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#core-ml-inference-in-swift
.md
Running inference in Swift is slightly faster than in Python because the models are already compiled in the `mlmodelc` format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward.
7_7_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#download
.md
To run inference in Swift on your Mac, you need one of the `compiled` checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the `compiled` variants: ```Python from huggingface_hub import snapshot_download from pathlib import Path repo_id = "apple/coreml-stable-diffusion-v1-4" variant = "original/compiled"
7_8_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#download
.md
repo_id = "apple/coreml-stable-diffusion-v1-4" variant = "original/compiled" model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) print(f"Model downloaded at {model_path}") ```
7_8_1
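If you're not sure which attention implementations and formats a Core ML checkpoint repository ships, you can list its files first. This is a minimal sketch using `huggingface_hub`; the folder layout shown in the comment reflects the `apple/coreml-stable-diffusion-v1-4` repo and may differ for other conversions.

```python
from huggingface_hub import list_repo_files

repo_id = "apple/coreml-stable-diffusion-v1-4"

# Collect the top-level "<attention-implementation>/<format>" folders,
# e.g. "original/compiled", "original/packages", "split_einsum/compiled", ...
files = list_repo_files(repo_id)
variants = sorted({"/".join(f.split("/")[:2]) for f in files if "/" in f})
print(variants)
```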
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#inferenceswift-inference
.md
To run inference, please clone Apple's repo: ```bash git clone https://github.com/apple/ml-stable-diffusion cd ml-stable-diffusion ``` And then use Apple's command line tool, [Swift Package Manager](https://www.swift.org/package-manager/#): ```bash swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" ```
7_9_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#inferenceswift-inference
.md
``` `--resource-path` has to point to one of the checkpoints downloaded in the previous step, so make sure it contains compiled Core ML bundles with the extension `.mlmodelc`. `--compute-units` has to be one of these values: `all`, `cpuOnly`, `cpuAndGPU`, `cpuAndNeuralEngine`. For more details, please refer to the [instructions in Apple's repo](https://github.com/apple/ml-stable-diffusion).
7_9_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#supported-diffusers-features
.md
The Core ML models and inference code don't support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: - Core ML models are only suitable for inference. They can't be used for training or fine-tuning.
7_10_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#supported-diffusers-features
.md
- Core ML models are only suitable for inference. They can't be used for training or fine-tuning. - Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and `DPMSolverMultistepScheduler`, which we ported to Swift from our `diffusers` implementation. We recommend you use `DPMSolverMultistepScheduler`, since it produces the same quality in about half the steps.
7_10_1
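For reference, the Swift port of `DPMSolverMultistepScheduler` comes from the `diffusers` implementation. In Python you can opt into the same scheduler by swapping it on the pipeline, as in this minimal sketch:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler with multistep DPM-Solver, which typically
# reaches similar quality in roughly half the steps.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]
```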
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#supported-diffusers-features
.md
- Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple's [conversion and inference repo](https://github.com/apple/ml-stable-diffusion) and our own [swift-coreml-diffusers](https://github.com/huggingface/swift-coreml-diffusers) repo are intended as technology demonstrators that other developers can build upon.
7_10_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#supported-diffusers-features
.md
If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂.
7_10_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#native-diffusers-swift-app
.md
One easy way to run Stable Diffusion on your own Apple hardware is to use [our open-source Swift repo](https://github.com/huggingface/swift-coreml-diffusers), based on `diffusers` and Apple's conversion and inference repo. You can study the code, compile it with [Xcode](https://developer.apple.com/xcode/) and adapt it for your own needs. For your convenience, there's also a [standalone Mac app in the App Store](https://apps.apple.com/app/diffusers/id1666309574), so you can play with it without having to
7_11_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/coreml.md
https://huggingface.co./docs/diffusers/en/optimization/coreml/#native-diffusers-swift-app
.md
Mac app in the App Store](https://apps.apple.com/app/diffusers/id1666309574), so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can't wait to see what you'll build 🙂.
7_11_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
[T-GATE](https://github.com/HaozheLiu-ST/T-GATE/tree/main) accelerates inference for [Stable Diffusion](../api/pipelines/stable_diffusion/overview), [PixArt](../api/pipelines/pixart), and [Latent Consistency Model](../api/pipelines/latent_consistency_models.md) pipelines by skipping the cross-attention calculation once it converges. This method doesn't require any additional training and it can speed up inference by 10-50%. T-GATE is also compatible with other optimization methods like
8_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
additional training and it can speed up inference by 10-50%. T-GATE is also compatible with other optimization methods like [DeepCache](./deepcache).
8_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
Before you begin, make sure you install T-GATE. ```bash pip install tgate pip install -U torch diffusers transformers accelerate DeepCache ``` To use T-GATE with a pipeline, you need to use its corresponding loader. | Pipeline | T-GATE Loader | |---|---| | PixArt | TgatePixArtLoader | | Stable Diffusion XL | TgateSDXLLoader | | Stable Diffusion XL + DeepCache | TgateSDXLDeepCacheLoader | | Stable Diffusion | TgateSDLoader | | Stable Diffusion + DeepCache | TgateSDDeepCacheLoader |
8_0_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
| Stable Diffusion | TgateSDLoader | | Stable Diffusion + DeepCache | TgateSDDeepCacheLoader | Next, create a `TgateLoader` with a pipeline, the gate step (the time step to stop calculating the cross attention), and the number of inference steps. Then call the `tgate` method on the pipeline with a prompt, gate step, and the number of inference steps. Let's see how to enable this for several different pipelines. <hfoptions id="pipelines"> <hfoption id="PixArt">
8_0_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
Let's see how to enable this for several different pipelines. <hfoptions id="pipelines"> <hfoption id="PixArt"> Accelerate `PixArtAlphaPipeline` with T-GATE: ```py import torch from diffusers import PixArtAlphaPipeline from tgate import TgatePixArtLoader
8_0_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) gate_step = 8 inference_step = 25 pipe = TgatePixArtLoader( pipe, gate_step=gate_step, num_inference_steps=inference_step, ).to("cuda")
8_0_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
image = pipe.tgate( "An alpaca made of colorful building blocks, cyberpunk.", gate_step=gate_step, num_inference_steps=inference_step, ).images[0] ``` </hfoption> <hfoption id="Stable Diffusion XL"> Accelerate `StableDiffusionXLPipeline` with T-GATE: ```py import torch from diffusers import StableDiffusionXLPipeline from diffusers import DPMSolverMultistepScheduler from tgate import TgateSDXLLoader
8_0_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True, ) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) gate_step = 10 inference_step = 25 pipe = TgateSDXLLoader( pipe, gate_step=gate_step, num_inference_steps=inference_step, ).to("cuda")
8_0_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
image = pipe.tgate( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.", gate_step=gate_step, num_inference_steps=inference_step ).images[0] ``` </hfoption> <hfoption id="StableDiffusionXL with DeepCache"> Accelerate `StableDiffusionXLPipeline` with [DeepCache](https://github.com/horseee/DeepCache) and T-GATE: ```py import torch from diffusers import StableDiffusionXLPipeline from diffusers import DPMSolverMultistepScheduler from tgate import TgateSDXLDeepCacheLoader
8_0_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True, ) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) gate_step = 10 inference_step = 25 pipe = TgateSDXLDeepCacheLoader( pipe, cache_interval=3, cache_branch_id=0, ).to("cuda")
8_0_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
image = pipe.tgate( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.", gate_step=gate_step, num_inference_steps=inference_step ).images[0] ``` </hfoption> <hfoption id="Latent Consistency Model"> Accelerate `latent-consistency/lcm-sdxl` with T-GATE: ```py import torch from diffusers import StableDiffusionXLPipeline from diffusers import UNet2DConditionModel, LCMScheduler from diffusers import DPMSolverMultistepScheduler from tgate import TgateSDXLLoader
8_0_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
unet = UNet2DConditionModel.from_pretrained( "latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16", ) pipe = StableDiffusionXLPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", ) pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) gate_step = 1 inference_step = 4 pipe = TgateSDXLLoader( pipe, gate_step=gate_step, num_inference_steps=inference_step, lcm=True ).to("cuda")
8_0_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#t-gate
.md
image = pipe.tgate( "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.", gate_step=gate_step, num_inference_steps=inference_step ).images[0] ``` </hfoption> </hfoptions> T-GATE also supports [`StableDiffusionPipeline`] and [PixArt-alpha/PixArt-LCM-XL-2-1024-MS](https://hf.co/PixArt-alpha/PixArt-LCM-XL-2-1024-MS).
8_0_12
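Since [`StableDiffusionPipeline`] is also supported, a sketch with `TgateSDLoader` would follow the same pattern as the examples above. The loader name comes from the table at the top of this guide; the argument names mirror the SDXL example and may differ slightly across `tgate` versions.

```python
import torch
from diffusers import StableDiffusionPipeline
from tgate import TgateSDLoader

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

gate_step = 10
inference_step = 25
# Wrap the pipeline so cross-attention is frozen after `gate_step` steps.
pipe = TgateSDLoader(
    pipe,
    gate_step=gate_step,
    num_inference_steps=inference_step,
).to("cuda")

image = pipe.tgate(
    "a photo of an astronaut riding a horse on mars",
    gate_step=gate_step,
    num_inference_steps=inference_step,
).images[0]
```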
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#benchmarks
.md
| Model | MACs | Param | Latency | Zero-shot 10K-FID on MS-COCO | |-----------------------|----------|-----------|---------|---------------------------| | SD-1.5 | 16.938T | 859.520M | 7.032s | 23.927 | | SD-1.5 w/ T-GATE | 9.875T | 815.557M | 4.313s | 20.789 | | SD-2.1 | 38.041T | 865.785M | 16.121s | 22.609 |
8_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#benchmarks
.md
| SD-2.1 | 38.041T | 865.785M | 16.121s | 22.609 | | SD-2.1 w/ T-GATE | 22.208T | 815.433M | 9.878s | 19.940 | | SD-XL | 149.438T | 2.570B | 53.187s | 24.628 | | SD-XL w/ T-GATE | 84.438T | 2.024B | 27.932s | 22.738 | | Pixart-Alpha | 107.031T | 611.350M | 61.502s | 38.669 |
8_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#benchmarks
.md
| Pixart-Alpha | 107.031T | 611.350M | 61.502s | 38.669 | | Pixart-Alpha w/ T-GATE | 65.318T | 462.585M | 37.867s | 35.825 | | DeepCache (SD-XL) | 57.888T | - | 19.931s | 23.755 | | DeepCache w/ T-GATE | 43.868T | - | 14.666s | 23.999 | | LCM (SD-XL) | 11.955T | 2.570B | 3.805s | 25.044 |
8_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#benchmarks
.md
| LCM (SD-XL) | 11.955T | 2.570B | 3.805s | 25.044 | | LCM w/ T-GATE | 11.171T | 2.024B | 3.533s | 25.028 | | LCM (Pixart-Alpha) | 8.563T | 611.350M | 4.733s | 36.086 | | LCM w/ T-GATE | 7.623T | 462.585M | 4.543s | 37.048 |
8_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tgate.md
https://huggingface.co./docs/diffusers/en/optimization/tgate/#benchmarks
.md
| LCM w/ T-GATE | 7.623T | 462.585M | 4.543s | 37.048 | The latency is tested on an NVIDIA GTX 1080 Ti, MACs and Params are calculated with [calflops](https://github.com/MrYxJ/calculate-flops.pytorch), and the FID is calculated with [PytorchFID](https://github.com/mseitzer/pytorch-fid); a sketch of the FID computation follows below.
8_1_4
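To reproduce the FID column, a minimal sketch with [pytorch-fid](https://github.com/mseitzer/pytorch-fid) could look like the following. The two directories are hypothetical placeholders for generated samples and MS-COCO reference images, and `calculate_fid_given_paths` is the helper behind the library's CLI (its exact signature may vary between versions).

```python
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Hypothetical folders with generated samples and MS-COCO reference images.
fid = calculate_fid_given_paths(
    ["generated_images/", "coco_reference_images/"],
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,  # default InceptionV3 feature dimension
)
print(f"FID: {fid:.3f}")
```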
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#paraattention
.md
<div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-performance.png"> </div> <div class="flex justify-center"> <img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/hunyuan-video-performance.png"> </div>
9_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#paraattention
.md
</div> Large image and video generation models, such as [FLUX.1-dev](https://huggingface.co./black-forest-labs/FLUX.1-dev) and [HunyuanVideo](https://huggingface.co./tencent/HunyuanVideo), can be an inference challenge for real-time applications and deployment because of their size.
9_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#paraattention
.md
[ParaAttention](https://github.com/chengzeyi/ParaAttention) is a library that implements **context parallelism** and **first block cache**, and can be combined with other techniques (torch.compile, fp8 dynamic quantization) to accelerate inference. This guide will show you how to apply ParaAttention to FLUX.1-dev and HunyuanVideo on NVIDIA L20 GPUs. No optimizations are applied for our baseline benchmark, except for HunyuanVideo to avoid out-of-memory errors.
9_0_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#paraattention
.md
No optimizations are applied for our baseline benchmark, except for HunyuanVideo to avoid out-of-memory errors. Our baseline benchmark shows that FLUX.1-dev is able to generate a 1024x1024 resolution image in 28 steps in 26.36 seconds, and HunyuanVideo is able to generate 129 frames at 720p resolution in 30 steps in 3675.71 seconds. > [!TIP]
9_0_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#paraattention
.md
> [!TIP] > For even faster inference with context parallelism, try using NVIDIA A100 or H100 GPUs (if available) with NVLink support, especially when there is a large number of GPUs.
9_0_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
Caching the output of the transformer blocks in the model and reusing it in the next inference steps reduces the computation cost and makes inference faster.
9_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
However, it is hard to decide when to reuse the cache while still ensuring the quality of the generated images or videos. ParaAttention directly uses the **residual difference of the first transformer block output** to approximate the difference among model outputs. When the difference is small enough, the residual difference of previous inference steps is reused; in other words, the denoising step is skipped (see the sketch below). This achieves a 2x speedup on FLUX.1-dev and HunyuanVideo inference with very good quality. <figure>
9_1_1
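Conceptually, the caching decision can be sketched as follows. This is illustrative pseudo-logic rather than ParaAttention's actual implementation; `first_block`, `remaining_blocks`, `cache`, and the threshold are stand-ins.

```python
import torch

def forward_with_first_block_cache(hidden_states, first_block, remaining_blocks, cache, threshold=0.08):
    # Run only the first transformer block and measure how much its output
    # residual changed compared to the previous denoising step.
    first_out = first_block(hidden_states)
    first_residual = first_out - hidden_states

    prev_residual = cache.get("first_residual")
    if prev_residual is not None:
        change = (first_residual - prev_residual).abs().mean() / prev_residual.abs().mean()
        if change < threshold:
            # The step is predicted to be redundant: reuse the cached residual
            # of the remaining blocks and skip their computation entirely.
            return first_out + cache["remaining_residual"]

    # Otherwise run the full stack of blocks and refresh the cache.
    out = first_out
    for block in remaining_blocks:
        out = block(out)
    cache["first_residual"] = first_residual
    cache["remaining_residual"] = out - first_out
    return out
```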
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
This achieves a 2x speedup on FLUX.1-dev and HunyuanVideo inference with very good quality. <figure> <img src="https://huggingface.co./datasets/chengzeyi/documentation-images/resolve/main/diffusers/para-attn/ada-cache.png" alt="Cache in Diffusion Transformer" /> <figcaption>How AdaCache works, First Block Cache is a variant of it</figcaption> </figure> <hfoptions id="first-block-cache"> <hfoption id="FLUX-1.dev">
9_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
</figure> <hfoptions id="first-block-cache"> <hfoption id="FLUX-1.dev"> To apply first block cache on FLUX.1-dev, call `apply_cache_on_pipe` as shown below. 0.08 is the default residual difference value for FLUX models. ```python import time import torch from diffusers import FluxPipeline
9_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16, ).to("cuda") from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe(pipe, residual_diff_threshold=0.08) # Enable memory savings # pipe.enable_model_cpu_offload() # pipe.enable_sequential_cpu_offload()
9_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
# Enable memory savings # pipe.enable_model_cpu_offload() # pipe.enable_sequential_cpu_offload() begin = time.time() image = pipe( "A cat holding a sign that says hello world", num_inference_steps=28, ).images[0] end = time.time() print(f"Time: {end - begin:.2f}s")
9_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
print("Saving image to flux.png") image.save("flux.png") ``` | Optimizations | Original | FBCache rdt=0.06 | FBCache rdt=0.08 | FBCache rdt=0.10 | FBCache rdt=0.12 | | - | - | - | - | - | - |
9_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
| Preview | ![Original](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-original.png) | ![FBCache rdt=0.06](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.06.png) | ![FBCache rdt=0.08](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.08.png) | ![FBCache
9_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
| ![FBCache rdt=0.10](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.10.png) | ![FBCache rdt=0.12](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/flux-fbc-0.12.png) |
9_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
| Wall Time (s) | 26.36 | 21.83 | 17.01 | 16.00 | 13.78 | First Block Cache reduced the inference time to 17.01 seconds compared to the baseline, or 1.55x faster, while maintaining nearly zero quality loss. </hfoption> <hfoption id="HunyuanVideo"> To apply First Block Cache on HunyuanVideo, call `apply_cache_on_pipe` as shown below. 0.06 is the default residual difference value for HunyuanVideo models. ```python import time import torch
9_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
```python import time import torch from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel from diffusers.utils import export_to_video
9_1_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
model_id = "tencent/HunyuanVideo" transformer = HunyuanVideoTransformer3DModel.from_pretrained( model_id, subfolder="transformer", torch_dtype=torch.bfloat16, revision="refs/pr/18", ) pipe = HunyuanVideoPipeline.from_pretrained( model_id, transformer=transformer, torch_dtype=torch.float16, revision="refs/pr/18", ).to("cuda") from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe(pipe, residual_diff_threshold=0.6) pipe.vae.enable_tiling()
9_1_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
apply_cache_on_pipe(pipe, residual_diff_threshold=0.6) pipe.vae.enable_tiling() begin = time.time() output = pipe( prompt="A cat walks on the grass, realistic", height=720, width=1280, num_frames=129, num_inference_steps=30, ).frames[0] end = time.time() print(f"Time: {end - begin:.2f}s")
9_1_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
print("Saving video to hunyuan_video.mp4") export_to_video(output, "hunyuan_video.mp4", fps=15) ``` <video controls> <source src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/hunyuan-video-original.mp4" type="video/mp4"> Your browser does not support the video tag. </video> <small> HunyuanVideo without FBCache </small> <video controls>
9_1_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
Your browser does not support the video tag. </video> <small> HunyuanVideo without FBCache </small> <video controls> <source src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/para-attn/hunyuan-video-fbc.mp4" type="video/mp4"> Your browser does not support the video tag. </video> <small> HunyuanVideo with FBCache </small>
9_1_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#first-block-cache
.md
Your browser does not support the video tag. </video> <small> HunyuanVideo with FBCache </small> First Block Cache reduced the inference time to 2271.06 seconds compared to the baseline, or 1.62x faster, while maintaining nearly zero quality loss. </hfoption> </hfoptions>
9_1_15
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
fp8 with dynamic quantization further speeds up inference and reduces memory usage. Both the activations and weights must be quantized in order to use the 8-bit [NVIDIA Tensor Cores](https://www.nvidia.com/en-us/data-center/tensor-cores/). Use `float8_weight_only` and `float8_dynamic_activation_float8_weight` to quantize the text encoder and transformer model.
9_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
Use `float8_weight_only` and `float8_dynamic_activation_float8_weight` to quantize the text encoder and transformer model. The default quantization method is per tensor quantization, but if your GPU supports row-wise quantization, you can also try it for better accuracy. Install [torchao](https://github.com/pytorch/ao/tree/main) with the command below. ```bash pip3 install -U torch torchao ```
9_2_1
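If your GPU supports it, row-wise scales can be requested through torchao's granularity argument. This is a sketch under the assumption that your torchao version exposes `PerRow` from `torchao.quantization`; check the torchao docs if the import or argument name differs.

```python
import torch
from diffusers import FluxPipeline
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, PerRow

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Row-wise scales instead of the default per-tensor scale can improve accuracy.
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight(granularity=PerRow()))
```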
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
```bash pip3 install -U torch torchao ``` [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) with `mode="max-autotune-no-cudagraphs"` or `mode="max-autotune"` selects the best kernel for performance. Compilation can take a long time if it's the first time the model is called, but it is worth it once the model has been compiled. This example only quantizes the transformer model, but you can also quantize the text encoder to reduce memory usage even more. > [!TIP]
9_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
> [!TIP] > Dynamic quantization can significantly change the distribution of the model output, so you need to change the `residual_diff_threshold` to a larger value for it to take effect. <hfoptions id="fp8-quantization"> <hfoption id="FLUX-1.dev"> ```python import time import torch from diffusers import FluxPipeline
9_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16, ).to("cuda") from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe( pipe, residual_diff_threshold=0.12, # Use a larger value to make the cache take effect ) from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only
9_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only quantize_(pipe.text_encoder, float8_weight_only()) quantize_(pipe.transformer, float8_dynamic_activation_float8_weight()) pipe.transformer = torch.compile( pipe.transformer, mode="max-autotune-no-cudagraphs", ) # Enable memory savings # pipe.enable_model_cpu_offload() # pipe.enable_sequential_cpu_offload()
9_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
# Enable memory savings # pipe.enable_model_cpu_offload() # pipe.enable_sequential_cpu_offload() for i in range(2): begin = time.time() image = pipe( "A cat holding a sign that says hello world", num_inference_steps=28, ).images[0] end = time.time() if i == 0: print(f"Warm up time: {end - begin:.2f}s") else: print(f"Time: {end - begin:.2f}s")
9_2_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
print("Saving image to flux.png") image.save("flux.png") ``` fp8 dynamic quantization and torch.compile reduced the inference speed to 7.56 seconds compared to the baseline, or 3.48x faster. </hfoption> <hfoption id="HunyuanVideo"> ```python import time import torch from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel from diffusers.utils import export_to_video
9_2_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
model_id = "tencent/HunyuanVideo" transformer = HunyuanVideoTransformer3DModel.from_pretrained( model_id, subfolder="transformer", torch_dtype=torch.bfloat16, revision="refs/pr/18", ) pipe = HunyuanVideoPipeline.from_pretrained( model_id, transformer=transformer, torch_dtype=torch.float16, revision="refs/pr/18", ).to("cuda") from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe(pipe)
9_2_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe(pipe) from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only quantize_(pipe.text_encoder, float8_weight_only()) quantize_(pipe.transformer, float8_dynamic_activation_float8_weight()) pipe.transformer = torch.compile( pipe.transformer, mode="max-autotune-no-cudagraphs", )
9_2_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
# Enable memory savings pipe.vae.enable_tiling() # pipe.enable_model_cpu_offload() # pipe.enable_sequential_cpu_offload() for i in range(2): begin = time.time() output = pipe( prompt="A cat walks on the grass, realistic", height=720, width=1280, num_frames=129, num_inference_steps=1 if i == 0 else 30, ).frames[0] end = time.time() if i == 0: print(f"Warm up time: {end - begin:.2f}s") else: print(f"Time: {end - begin:.2f}s")
9_2_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
print("Saving video to hunyuan_video.mp4") export_to_video(output, "hunyuan_video.mp4", fps=15) ``` A NVIDIA L20 GPU only has 48GB memory and could face out-of-memory (OOM) errors after compilation and if `enable_model_cpu_offload` isn't called because HunyuanVideo has very large activation tensors when running with high resolution and large number of frames. For GPUs with less than 80GB of memory, you can try reducing the resolution and number of frames to avoid OOM errors.
9_2_11
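A rough sketch of that fallback, continuing from the HunyuanVideo pipeline defined above (the exact values are illustrative; pick whatever fits your GPU):

```python
# Fewer frames and a smaller resolution keep the activation tensors manageable.
output = pipe(
    prompt="A cat walks on the grass, realistic",
    height=544,
    width=960,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
```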
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#fp8-quantization
.md
Large video generation models are usually bottlenecked by the attention computations rather than the fully connected layers. These models don't significantly benefit from quantization and torch.compile. </hfoption> </hfoptions>
9_2_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
Context Parallelism parallelizes inference and scales with multiple GPUs. The ParaAttention compositional design allows you to combine Context Parallelism with First Block Cache and dynamic quantization. > [!TIP] > Refer to the [ParaAttention](https://github.com/chengzeyi/ParaAttention/tree/main) repository for detailed instructions and examples of how to scale inference with multiple GPUs.
9_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
If the inference process needs to be persistent and serviceable, consider using [torch.multiprocessing](https://pytorch.org/docs/stable/multiprocessing.html) to write your own inference processor (a minimal sketch follows below). This eliminates the overhead of repeatedly launching the process and loading and recompiling the model. <hfoptions id="context-parallelism"> <hfoption id="FLUX-1.dev">
9_3_1
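A minimal single-GPU sketch of such a persistent worker is shown below. It is illustrative only: it loads the pipeline once in a background process and then serves prompts from a queue, without the context-parallel setup from the examples that follow.

```python
import torch
import torch.multiprocessing as mp

def worker(request_queue, result_queue):
    from diffusers import FluxPipeline

    # Load (and optionally compile) the model once; every request reuses it.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")

    while True:
        prompt = request_queue.get()
        if prompt is None:  # shutdown signal
            break
        image = pipe(prompt, num_inference_steps=28).images[0]
        result_queue.put(image)

if __name__ == "__main__":
    mp.set_start_method("spawn")
    request_queue, result_queue = mp.Queue(), mp.Queue()
    proc = mp.Process(target=worker, args=(request_queue, result_queue))
    proc.start()

    request_queue.put("A cat holding a sign that says hello world")
    result_queue.get().save("flux.png")

    request_queue.put(None)  # tell the worker to exit
    proc.join()
```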
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
<hfoptions id="context-parallelism"> <hfoption id="FLUX-1.dev"> The code sample below combines First Block Cache, fp8 dynamic quantization, torch.compile, and Context Parallelism for the fastest inference speed. ```python import time import torch import torch.distributed as dist from diffusers import FluxPipeline
9_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
dist.init_process_group() torch.cuda.set_device(dist.get_rank()) pipe = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16, ).to("cuda") from para_attn.context_parallel import init_context_parallel_mesh from para_attn.context_parallel.diffusers_adapters import parallelize_pipe from para_attn.parallel_vae.diffusers_adapters import parallelize_vae
9_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
mesh = init_context_parallel_mesh( pipe.device.type, max_ring_dim_size=2, ) parallelize_pipe( pipe, mesh=mesh, ) parallelize_vae(pipe.vae, mesh=mesh._flatten()) from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe( pipe, residual_diff_threshold=0.12, # Use a larger value to make the cache take effect ) from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only
9_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only quantize_(pipe.text_encoder, float8_weight_only()) quantize_(pipe.transformer, float8_dynamic_activation_float8_weight()) torch._inductor.config.reorder_for_compute_comm_overlap = True pipe.transformer = torch.compile( pipe.transformer, mode="max-autotune-no-cudagraphs", )
9_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
# Enable memory savings # pipe.enable_model_cpu_offload(gpu_id=dist.get_rank()) # pipe.enable_sequential_cpu_offload(gpu_id=dist.get_rank()) for i in range(2): begin = time.time() image = pipe( "A cat holding a sign that says hello world", num_inference_steps=28, output_type="pil" if dist.get_rank() == 0 else "pt", ).images[0] end = time.time() if dist.get_rank() == 0: if i == 0: print(f"Warm up time: {end - begin:.2f}s") else: print(f"Time: {end - begin:.2f}s")
9_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
if dist.get_rank() == 0: print("Saving image to flux.png") image.save("flux.png")
9_3_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
dist.destroy_process_group() ``` Save to `run_flux.py` and launch it with [torchrun](https://pytorch.org/docs/stable/elastic/run.html). ```bash # Use --nproc_per_node to specify the number of GPUs torchrun --nproc_per_node=2 run_flux.py ``` Inference time is reduced to 8.20 seconds compared to the baseline, or 3.21x faster, with 2 NVIDIA L20 GPUs. On 4 L20s, inference time drops to 3.90 seconds, or 6.75x faster. </hfoption> <hfoption id="HunyuanVideo">
9_3_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
</hfoption> <hfoption id="HunyuanVideo"> The code sample below combines First Block Cache and Context Parallelism for the fastest inference speed. ```python import time import torch import torch.distributed as dist from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel from diffusers.utils import export_to_video
9_3_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
dist.init_process_group() torch.cuda.set_device(dist.get_rank()) model_id = "tencent/HunyuanVideo" transformer = HunyuanVideoTransformer3DModel.from_pretrained( model_id, subfolder="transformer", torch_dtype=torch.bfloat16, revision="refs/pr/18", ) pipe = HunyuanVideoPipeline.from_pretrained( model_id, transformer=transformer, torch_dtype=torch.float16, revision="refs/pr/18", ).to("cuda")
9_3_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
from para_attn.context_parallel import init_context_parallel_mesh from para_attn.context_parallel.diffusers_adapters import parallelize_pipe from para_attn.parallel_vae.diffusers_adapters import parallelize_vae mesh = init_context_parallel_mesh( pipe.device.type, ) parallelize_pipe( pipe, mesh=mesh, ) parallelize_vae(pipe.vae, mesh=mesh._flatten()) from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe(pipe)
9_3_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe apply_cache_on_pipe(pipe) # from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only # # torch._inductor.config.reorder_for_compute_comm_overlap = True # # quantize_(pipe.text_encoder, float8_weight_only()) # quantize_(pipe.transformer, float8_dynamic_activation_float8_weight()) # pipe.transformer = torch.compile( # pipe.transformer, mode="max-autotune-no-cudagraphs", # )
9_3_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
# Enable memory savings pipe.vae.enable_tiling() # pipe.enable_model_cpu_offload(gpu_id=dist.get_rank()) # pipe.enable_sequential_cpu_offload(gpu_id=dist.get_rank())
9_3_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
for i in range(2): begin = time.time() output = pipe( prompt="A cat walks on the grass, realistic", height=720, width=1280, num_frames=129, num_inference_steps=1 if i == 0 else 30, output_type="pil" if dist.get_rank() == 0 else "pt", ).frames[0] end = time.time() if dist.get_rank() == 0: if i == 0: print(f"Warm up time: {end - begin:.2f}s") else: print(f"Time: {end - begin:.2f}s") if dist.get_rank() == 0: print("Saving video to hunyuan_video.mp4") export_to_video(output, "hunyuan_video.mp4", fps=15)
9_3_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#context-parallelism
.md
dist.destroy_process_group() ``` Save to `run_hunyuan_video.py` and launch it with [torchrun](https://pytorch.org/docs/stable/elastic/run.html). ```bash # Use --nproc_per_node to specify the number of GPUs torchrun --nproc_per_node=8 run_hunyuan_video.py ``` Inference time is reduced to 649.23 seconds compared to the baseline, or 5.66x faster, with 8 NVIDIA L20 GPUs. </hfoption> </hfoptions>
9_3_15
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#benchmarks
.md
<hfoptions id="conclusion"> <hfoption id="FLUX-1.dev"> | GPU Type | Number of GPUs | Optimizations | Wall Time (s) | Speedup | | - | - | - | - | - | | NVIDIA L20 | 1 | Baseline | 26.36 | 1.00x | | NVIDIA L20 | 1 | FBCache (rdt=0.08) | 17.01 | 1.55x | | NVIDIA L20 | 1 | FP8 DQ | 13.40 | 1.96x | | NVIDIA L20 | 1 | FBCache (rdt=0.12) + FP8 DQ | 7.56 | 3.48x | | NVIDIA L20 | 2 | FBCache (rdt=0.12) + FP8 DQ + CP | 4.92 | 5.35x | | NVIDIA L20 | 4 | FBCache (rdt=0.12) + FP8 DQ + CP | 3.90 | 6.75x | </hfoption>
9_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/para_attn.md
https://huggingface.co./docs/diffusers/en/optimization/para_attn/#benchmarks
.md
| NVIDIA L20 | 4 | FBCache (rdt=0.12) + FP8 DQ + CP | 3.90 | 6.75x | </hfoption> <hfoption id="HunyuanVideo"> | GPU Type | Number of GPUs | Optimizations | Wall Time (s) | Speedup | | - | - | - | - | - | | NVIDIA L20 | 1 | Baseline | 3675.71 | 1.00x | | NVIDIA L20 | 1 | FBCache | 2271.06 | 1.62x | | NVIDIA L20 | 2 | FBCache + CP | 1132.90 | 3.24x | | NVIDIA L20 | 4 | FBCache + CP | 718.15 | 5.12x | | NVIDIA L20 | 8 | FBCache + CP | 649.23 | 5.66x | </hfoption> </hfoptions>
9_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
10_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#reduce-memory-usage
.md
A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. <Tip>
10_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#reduce-memory-usage
.md
<Tip> In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to [Speed up inference](fp16). </Tip>
10_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#reduce-memory-usage
.md
</Tip> The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on an NVIDIA Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. | | latency | speed-up | | ---------------- | ------- | ------- | | original | 9.50s | x1 | | fp16 | 3.61s | x2.63 | | channels last | 3.30s | x2.88 |
10_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#reduce-memory-usage
.md
| original | 9.50s | x1 | | fp16 | 3.61s | x2.63 | | channels last | 3.30s | x2.88 | | traced UNet | 3.21s | x2.96 | | memory-efficient attention | 2.63s | x3.61 |
10_1_3
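For context, the fp16 and channels-last rows in the table above correspond to optimizations like the following sketch (standard diffusers/PyTorch calls; see the [Speed up inference](fp16) guide for details):

```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 weights roughly halve memory use and speed up inference.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# The channels-last memory format can speed up the UNet's convolutions.
pipe.unet.to(memory_format=torch.channels_last)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=50).images[0]
```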
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#sliced-vae
.md
Sliced VAE enables decoding large batches of images with limited VRAM, or batches of 32 or more images, by decoding the batch of latents one image at a time. You'll likely want to couple this with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to reduce memory use further if you have xFormers installed. To use sliced VAE, call [`~StableDiffusionPipeline.enable_vae_slicing`] on your pipeline before inference: ```python import torch from diffusers import StableDiffusionPipeline
10_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#sliced-vae
.md
pipe = StableDiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" pipe.enable_vae_slicing() #pipe.enable_xformers_memory_efficient_attention() images = pipe([prompt] * 32).images ``` You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches.
10_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#tiled-vae
.md
Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also use tiled VAE with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to reduce memory use further if you have xFormers installed.
10_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#tiled-vae
.md
To use tiled VAE processing, call [`~StableDiffusionPipeline.enable_vae_tiling`] on your pipeline before inference: ```python import torch from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler
10_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#tiled-vae
.md
pipe = StableDiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "a beautiful landscape photograph" pipe.enable_vae_tiling() #pipe.enable_xformers_memory_efficient_attention()
10_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#tiled-vae
.md
image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] ``` The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn't see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller.
10_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#cpu-offloading
.md
Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call [`~StableDiffusionPipeline.enable_sequential_cpu_offload`]: ```Python import torch from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, )
10_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#cpu-offloading
.md
prompt = "a photo of an astronaut riding a horse on mars" pipe.enable_sequential_cpu_offload() image = pipe(prompt).images[0] ```
10_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#cpu-offloading
.md
pipe.enable_sequential_cpu_offload() image = pipe(prompt).images[0] ``` CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. <Tip>
10_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md
https://huggingface.co./docs/diffusers/en/optimization/memory/#cpu-offloading
.md
<Tip> Consider using [model offloading](#model-offloading) if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won't be as large. </Tip> <Tip warning={true}> When using [`~StableDiffusionPipeline.enable_sequential_cpu_offload`], don't move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this [issue](https://github.com/huggingface/diffusers/issues/1934) for more information).
10_4_3
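In other words, let the offloading hooks manage device placement instead of calling `.to("cuda")` yourself; a minimal sketch:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)

# Don't call pipe.to("cuda") here: sequential offloading moves each submodule
# to the GPU only for its forward pass and back to the CPU afterwards.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```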