# Diffusers

- **Tutorials**: Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using πŸ€— Diffusers for the first time!
- **[How-to guides](./using-diffusers/loading_overview)**: Practical guides to help you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and apply different training techniques.
- **[Conceptual guides](./conceptual/philosophy)**: Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library.
- **[Reference](./api/models/overview)**: Technical descriptions of how πŸ€— Diffusers classes and methods work.
# Installation

πŸ€— Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

- [PyTorch](https://pytorch.org/get-started/locally/) installation instructions
- [Flax](https://flax.readthedocs.io/en/latest/) installation instructions
## Install with pip

You should install πŸ€— Diffusers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment:

```bash
source .env/bin/activate
```

You should also install πŸ€— Transformers because πŸ€— Diffusers relies on its models:

<frameworkcontent>
<pt>
Note - PyTorch only supports Python 3.8 - 3.11 on Windows.

```bash
pip install diffusers["torch"] transformers
```
</pt>
<jax>

```bash
pip install diffusers["flax"] transformers
```
</jax>
</frameworkcontent>
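You can verify the installation by printing the installed version; a quick sanity check:

```python
# Quick sanity check that πŸ€— Diffusers is importable
import diffusers

print(diffusers.__version__)
```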
## Install with conda

After activating your virtual environment, you can also install πŸ€— Diffusers with `conda` (the package is maintained by the community):

```bash
conda install -c conda-forge diffusers
```
## Install from source

Before installing πŸ€— Diffusers from source, make sure you have PyTorch and πŸ€— Accelerate installed.

To install πŸ€— Accelerate:

```bash
pip install accelerate
```

Then install πŸ€— Diffusers from source:

```bash
pip install git+https://github.com/huggingface/diffusers
```

This command installs the bleeding edge `main` version rather than the latest `stable` version. The `main` version is useful for staying up-to-date with the latest developments, for instance, if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose) so we can fix it even sooner!
## Editable install

You will need an editable install if you'd like to:

* Use the `main` version of the source code.
* Contribute to πŸ€— Diffusers and need to test changes in the code.

Clone the repository and install πŸ€— Diffusers with the following commands:

```bash
git clone https://github.com/huggingface/diffusers.git
cd diffusers
```

<frameworkcontent>
<pt>

```bash
pip install -e ".[torch]"
```
</pt>
<jax>

```bash
pip install -e ".[flax]"
```
</jax>
</frameworkcontent>

These commands link the folder you cloned the repository to with your Python library paths. Python will now look inside the folder you cloned in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.10/site-packages/`, Python will also search the `~/diffusers/` folder you cloned to.

<Tip warning={true}>

You must keep the `diffusers` folder if you want to keep using the library.

</Tip>

Now you can easily update your clone to the latest version of πŸ€— Diffusers with the following command:

```bash
cd ~/diffusers/
git pull
```

Your Python environment will find the `main` version of πŸ€— Diffusers on the next run.
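To confirm Python is picking up your clone rather than a regular site-packages install, a small check you can run (the exact path depends on where you cloned):

```python
# Should print a path inside the folder you cloned, e.g. ~/diffusers/src/diffusers/__init__.py
import diffusers

print(diffusers.__file__)
```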
## Cache

Model weights and files are downloaded from the Hub to a cache, which is usually in your home directory. You can change the cache location with the `HF_HOME` or `HUGGINGFACE_HUB_CACHE` environment variables, or by configuring the `cache_dir` parameter in methods like [`~DiffusionPipeline.from_pretrained`].
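As a minimal sketch of the `cache_dir` approach (the checkpoint name is just an example):

```python
from diffusers import DiffusionPipeline

# Files are downloaded to ./model_cache instead of the default Hub cache
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    cache_dir="./model_cache",
)
```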
Cached files allow you to run πŸ€— Diffusers offline. To prevent πŸ€— Diffusers from connecting to the internet, set the `HF_HUB_OFFLINE` environment variable to `True` and πŸ€— Diffusers will only load previously downloaded files in the cache.

```shell
export HF_HUB_OFFLINE=True
```

For more details about managing and cleaning the cache, take a look at the [caching](https://huggingface.co./docs/huggingface_hub/guides/manage-cache) guide.
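If you'd rather configure this from Python, a sketch that assumes the variable is set before the library is imported (the environment is read at import time):

```python
import os

# Must be set before importing diffusers/huggingface_hub
os.environ["HF_HUB_OFFLINE"] = "1"

from diffusers import DiffusionPipeline

# Now loads only from the local cache and never hits the network
pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
```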
## Telemetry logging

Our library gathers telemetry information during [`~DiffusionPipeline.from_pretrained`] requests. The data gathered includes the version of πŸ€— Diffusers and PyTorch/Flax, the requested model or pipeline class, and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. This usage data helps us debug issues and prioritize new features. Telemetry is only sent when loading models and pipelines from the Hub, and it is not collected if you're loading local files.

We understand that not everyone wants to share additional information, and we respect your privacy. You can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal:

On Linux/macOS:

```bash
export DISABLE_TELEMETRY=YES
```

On Windows:

```bash
set DISABLE_TELEMETRY=YES
```
# xFormers

We recommend [xFormers](https://github.com/facebookresearch/xformers) for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption.

Install xFormers from `pip`:

```bash
pip install xformers
```

<Tip>

The xFormers `pip` package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend [installing xFormers from the source](https://github.com/facebookresearch/xformers#installing-xformers).

</Tip>

After xFormers is installed, you can use `enable_xformers_memory_efficient_attention()` for faster inference and reduced memory consumption as shown in this [section](memory#memory-efficient-attention).

<Tip warning={true}>

According to this [issue](https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212), xFormers `v0.0.16` cannot be used for training (fine-tune or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments.

</Tip>
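As a quick sketch of what enabling it looks like on a pipeline (the checkpoint is only an example):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in xFormers' memory-efficient attention kernels
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```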
# PyTorch 2.0

πŸ€— Diffusers supports the latest optimizations from [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/) which include:

1. A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers.
2. [`torch.compile`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html), a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled.
Both of these optimizations require PyTorch 2.0 or later and πŸ€— Diffusers > 0.13.0.

```bash
pip install --upgrade torch diffusers
```
## Scaled dot product attention

[`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) (SDPA) is an optimized and memory-efficient attention implementation (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you're using PyTorch 2.0 and the latest version of πŸ€— Diffusers, so you don't need to add anything to your code.
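To confirm your PyTorch build exposes SDPA, a one-line check:

```python
import torch

# True on PyTorch 2.0+, where diffusers uses SDPA by default
print(hasattr(torch.nn.functional, "scaled_dot_product_attention"))
```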
However, if you want to explicitly enable it, you can set a [`DiffusionPipeline`] to use [`~models.attention_processor.AttnProcessor2_0`]:

```diff
import torch
from diffusers import DiffusionPipeline
+ from diffusers.models.attention_processor import AttnProcessor2_0

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
+ pipe.unet.set_attn_processor(AttnProcessor2_0())

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```

SDPA should be as fast and memory efficient as `xFormers`; check the [benchmark](#benchmark) for more details.
In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, [`~models.attention_processor.AttnProcessor`]. To revert to [`~models.attention_processor.AttnProcessor`], call the [`~UNet2DConditionModel.set_default_attn_processor`] function on the pipeline:

```diff
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
+ pipe.unet.set_default_attn_processor()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
## torch.compile

The `torch.compile` function can often provide an additional speed-up to your PyTorch code. In πŸ€— Diffusers, it is usually best to wrap the UNet with `torch.compile` because it does most of the heavy lifting in the pipeline.

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# Define the generation arguments used below
prompt = "a photo of an astronaut riding a horse on mars"
steps = 25
batch_size = 1

images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0]
```
Depending on GPU type, `torch.compile` can provide an *additional speed-up* of **5-300x** on top of SDPA! If you're using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), `torch.compile` is able to squeeze even more performance out of these GPUs.
Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive.

For more information and different options about `torch.compile`, refer to the [`torch_compile`](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) tutorial.

> [!TIP]
> Learn more about other ways PyTorch 2.0 can help optimize your model in the [Accelerate inference of text-to-image diffusion models](../tutorials/fast_diffusion) tutorial.
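To observe the one-time compilation cost and the steady-state speed-up yourself, a rough timing sketch (assuming the compiled `pipe` from the snippet above):

```python
import time
import torch

# The first call pays the compilation cost; subsequent calls show steady-state speed
for i in range(3):
    torch.cuda.synchronize()
    start = time.perf_counter()
    _ = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25)
    torch.cuda.synchronize()
    print(f"run {i}: {time.perf_counter() - start:.2f}s")
```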
## Benchmark

We conducted a comprehensive benchmark with PyTorch 2.0's efficient attention implementation and `torch.compile` across different GPUs and batch sizes for five of our most used pipelines. The code was benchmarked on πŸ€— Diffusers v0.17.0.dev0 to optimize `torch.compile` usage (see [here](https://github.com/huggingface/diffusers/pull/3313) for more details).

Expand the dropdown below to find the code used to benchmark each pipeline:

<details>
### Stable Diffusion text-to-image

```python
from diffusers import DiffusionPipeline
import torch

path = "stable-diffusion-v1-5/stable-diffusion-v1-5"

run_compile = True  # Set True / False

pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True)
pipe = pipe.to("cuda")
pipe.unet.to(memory_format=torch.channels_last)

if run_compile:
    print("Run torch compile")
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

prompt = "ghibli style, a fantasy landscape with castles"

for _ in range(3):
    images = pipe(prompt=prompt).images
```
### Stable Diffusion image-to-image

```python
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image
import torch

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = load_image(url)
init_image = init_image.resize((512, 512))

path = "stable-diffusion-v1-5/stable-diffusion-v1-5"

run_compile = True  # Set True / False

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True)
pipe = pipe.to("cuda")
pipe.unet.to(memory_format=torch.channels_last)

if run_compile:
    print("Run torch compile")
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

prompt = "ghibli style, a fantasy landscape with castles"

for _ in range(3):
    image = pipe(prompt=prompt, image=init_image).images[0]
```
### Stable Diffusion inpainting

```python
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image
import torch

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

path = "runwayml/stable-diffusion-inpainting"

run_compile = True  # Set True / False

pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True)
pipe = pipe.to("cuda")
pipe.unet.to(memory_format=torch.channels_last)

if run_compile:
    print("Run torch compile")
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

prompt = "ghibli style, a fantasy landscape with castles"

for _ in range(3):
    image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
### ControlNet

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
import torch

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = load_image(url)
init_image = init_image.resize((512, 512))

path = "stable-diffusion-v1-5/stable-diffusion-v1-5"

run_compile = True  # Set True / False

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True
)

pipe = pipe.to("cuda")
pipe.unet.to(memory_format=torch.channels_last)
pipe.controlnet.to(memory_format=torch.channels_last)

if run_compile:
    print("Run torch compile")
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
    pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True)

prompt = "ghibli style, a fantasy landscape with castles"

for _ in range(3):
    image = pipe(prompt=prompt, image=init_image).images[0]
```
### DeepFloyd IF text-to-image + upscaling

```python
from diffusers import DiffusionPipeline
import torch

run_compile = True  # Set True / False

pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True)
pipe_1.to("cuda")
pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True)
pipe_2.to("cuda")
pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True)
pipe_3.to("cuda")

pipe_1.unet.to(memory_format=torch.channels_last)
pipe_2.unet.to(memory_format=torch.channels_last)
pipe_3.unet.to(memory_format=torch.channels_last)

if run_compile:
    pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True)
    pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True)
    pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True)

prompt = "the blue hulk"

prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16)
neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16)

for _ in range(3):
    image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images
    image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images
    image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images
```

</details>
The graph below highlights the relative speed-ups for the [`StableDiffusionPipeline`] across five GPU families with PyTorch 2.0 and `torch.compile` enabled. The benchmarks for the following graphs are measured in *number of iterations/second*.

![t2i_speedup](https://huggingface.co./datasets/diffusers/docs-images/resolve/main/pt2_benchmarks/t2i_speedup.png)

To give you an even better idea of how this speed-up holds for the other pipelines, consider the following graph for an A100 with PyTorch 2.0 and `torch.compile`:

![a100_numbers](https://huggingface.co./datasets/diffusers/docs-images/resolve/main/pt2_benchmarks/a100_numbers.png)

In the following tables, we report our findings in terms of the *number of iterations/second*.
### A100 (batch size: 1)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 21.66 | 23.13 | 44.03 | 49.74 |
| SD - img2img | 21.81 | 22.40 | 43.92 | 46.32 |
| SD - inpaint | 22.24 | 23.23 | 43.76 | 49.25 |
| SD - controlnet | 15.02 | 15.82 | 32.13 | 36.08 |
| IF | 20.21 / <br>13.84 / <br>24.00 | 20.12 / <br>13.70 / <br>24.03 | ❌ | 97.34 / <br>27.23 / <br>111.66 |
| SDXL - txt2img | 8.64 | 9.9 | - | - |
### A100 (batch size: 4)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 11.6 | 13.12 | 14.62 | 17.27 |
| SD - img2img | 11.47 | 13.06 | 14.66 | 17.25 |
| SD - inpaint | 11.67 | 13.31 | 14.88 | 17.48 |
| SD - controlnet | 8.28 | 9.38 | 10.51 | 12.41 |
| IF | 25.02 | 18.04 | ❌ | 48.47 |
| SDXL - txt2img | 2.44 | 2.74 | - | - |
### A100 (batch size: 16)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 3.04 | 3.6 | 3.83 | 4.68 |
| SD - img2img | 2.98 | 3.58 | 3.83 | 4.67 |
| SD - inpaint | 3.04 | 3.66 | 3.9 | 4.76 |
| SD - controlnet | 2.15 | 2.58 | 2.74 | 3.35 |
| IF | 8.78 | 9.82 | ❌ | 16.77 |
| SDXL - txt2img | 0.64 | 0.72 | - | - |
### V100 (batch size: 1)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 18.99 | 19.14 | 20.95 | 22.17 |
| SD - img2img | 18.56 | 19.18 | 20.95 | 22.11 |
| SD - inpaint | 19.14 | 19.06 | 21.08 | 22.20 |
| SD - controlnet | 13.48 | 13.93 | 15.18 | 15.88 |
| IF | 20.01 / <br>9.08 / <br>23.34 | 19.79 / <br>8.98 / <br>24.10 | ❌ | 55.75 / <br>11.57 / <br>57.67 |
### V100 (batch size: 4)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 5.96 | 5.89 | 6.83 | 6.86 |
| SD - img2img | 5.90 | 5.91 | 6.81 | 6.82 |
| SD - inpaint | 5.99 | 6.03 | 6.93 | 6.95 |
| SD - controlnet | 4.26 | 4.29 | 4.92 | 4.93 |
| IF | 15.41 | 14.76 | ❌ | 22.95 |
### V100 (batch size: 16)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 1.66 | 1.66 | 1.92 | 1.90 |
| SD - img2img | 1.65 | 1.65 | 1.91 | 1.89 |
| SD - inpaint | 1.69 | 1.69 | 1.95 | 1.93 |
| SD - controlnet | 1.19 | 1.19 | OOM after warmup | 1.36 |
| IF | 5.43 | 5.29 | ❌ | 7.06 |
### T4 (batch size: 1)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 6.9 | 6.95 | 7.3 | 7.56 |
| SD - img2img | 6.84 | 6.99 | 7.04 | 7.55 |
| SD - inpaint | 6.91 | 6.7 | 7.01 | 7.37 |
| SD - controlnet | 4.89 | 4.86 | 5.35 | 5.48 |
| IF | 17.42 / <br>2.47 / <br>18.52 | 16.96 / <br>2.45 / <br>18.69 | ❌ | 24.63 / <br>2.47 / <br>23.39 |
| SDXL - txt2img | 1.15 | 1.16 | - | - |
### T4 (batch size: 4)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 1.79 | 1.79 | 2.03 | 1.99 |
| SD - img2img | 1.77 | 1.77 | 2.05 | 2.04 |
| SD - inpaint | 1.81 | 1.82 | 2.09 | 2.09 |
| SD - controlnet | 1.34 | 1.27 | 1.47 | 1.46 |
| IF | 5.79 | 5.61 | ❌ | 7.39 |
| SDXL - txt2img | 0.288 | 0.289 | - | - |
### T4 (batch size: 16)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 2.34s | 2.30s | OOM after 2nd iteration | 1.99s |
| SD - img2img | 2.35s | 2.31s | OOM after warmup | 2.00s |
| SD - inpaint | 2.30s | 2.26s | OOM after 2nd iteration | 1.95s |
| SD - controlnet | OOM after 2nd iteration | OOM after 2nd iteration | OOM after warmup | OOM after warmup |
| IF * | 1.44 | 1.44 | ❌ | 1.94 |
| SDXL - txt2img | OOM | OOM | - | - |
### RTX 3090 (batch size: 1)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 22.56 | 22.84 | 23.84 | 25.69 |
| SD - img2img | 22.25 | 22.61 | 24.1 | 25.83 |
| SD - inpaint | 22.22 | 22.54 | 24.26 | 26.02 |
| SD - controlnet | 16.03 | 16.33 | 17.38 | 18.56 |
| IF | 27.08 / <br>9.07 / <br>31.23 | 26.75 / <br>8.92 / <br>31.47 | ❌ | 68.08 / <br>11.16 / <br>65.29 |
### RTX 3090 (batch size: 4)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 6.46 | 6.35 | 7.29 | 7.3 |
| SD - img2img | 6.33 | 6.27 | 7.31 | 7.26 |
| SD - inpaint | 6.47 | 6.4 | 7.44 | 7.39 |
| SD - controlnet | 4.59 | 4.54 | 5.27 | 5.26 |
| IF | 16.81 | 16.62 | ❌ | 21.57 |
### RTX 3090 (batch size: 16)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 1.7 | 1.69 | 1.93 | 1.91 |
| SD - img2img | 1.68 | 1.67 | 1.93 | 1.9 |
| SD - inpaint | 1.72 | 1.71 | 1.97 | 1.94 |
| SD - controlnet | 1.23 | 1.22 | 1.4 | 1.38 |
| IF | 5.01 | 5.00 | ❌ | 6.33 |
### RTX 4090 (batch size: 1)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 40.5 | 41.89 | 44.65 | 49.81 |
| SD - img2img | 40.39 | 41.95 | 44.46 | 49.8 |
| SD - inpaint | 40.51 | 41.88 | 44.58 | 49.72 |
| SD - controlnet | 29.27 | 30.29 | 32.26 | 36.03 |
| IF | 69.71 / <br>18.78 / <br>85.49 | 69.13 / <br>18.80 / <br>85.56 | ❌ | 124.60 / <br>26.37 / <br>138.79 |
| SDXL - txt2img | 6.8 | 8.18 | - | - |
### RTX 4090 (batch size: 4)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 12.62 | 12.84 | 15.32 | 15.59 |
| SD - img2img | 12.61 | 12.79 | 15.35 | 15.66 |
| SD - inpaint | 12.65 | 12.81 | 15.3 | 15.58 |
| SD - controlnet | 9.1 | 9.25 | 11.03 | 11.22 |
| IF | 31.88 | 31.14 | ❌ | 43.92 |
| SDXL - txt2img | 2.19 | 2.35 | - | - |
### RTX 4090 (batch size: 16)

| **Pipeline** | **torch 2.0 - <br>no compile** | **torch nightly - <br>no compile** | **torch 2.0 - <br>compile** | **torch nightly - <br>compile** |
|:---:|:---:|:---:|:---:|:---:|
| SD - txt2img | 3.17 | 3.2 | 3.84 | 3.85 |
| SD - img2img | 3.16 | 3.2 | 3.84 | 3.85 |
| SD - inpaint | 3.17 | 3.2 | 3.85 | 3.85 |
| SD - controlnet | 2.23 | 2.3 | 2.7 | 2.75 |
| IF | 9.26 | 9.2 | ❌ | 13.31 |
| SDXL - txt2img | 0.52 | 0.53 | - | - |
## Notes

* Follow this [PR](https://github.com/huggingface/diffusers/pull/3313) for more details on the environment used for conducting the benchmarks.
* For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1.

*Thanks to [Horace He](https://github.com/Chillee) from the PyTorch team for their support in improving our support of `torch.compile()` in Diffusers.*
# How to run Stable Diffusion with Core ML

[Core ML](https://developer.apple.com/documentation/coreml) is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift.
Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it's running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on the GPU, for example.
<Tip>

You can also run the `diffusers` Python codebase on Apple Silicon Macs using the `mps` accelerator built into PyTorch. This approach is explained in depth in [the mps guide](mps), but it is not compatible with native apps.

</Tip>
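For reference, a minimal sketch of that `mps` path (assumes an Apple Silicon Mac with a recent PyTorch build):

```python
import torch
from diffusers import DiffusionPipeline

# Standard diffusers pipeline running on Apple Silicon via PyTorch's mps backend
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("mps")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```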
## Stable Diffusion Core ML checkpoints

Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before you can use them inside native apps. Thankfully, Apple engineers developed [a conversion tool](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) based on `diffusers` to convert the PyTorch checkpoints to Core ML.
Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you're interested in is already available in Core ML format:

- the [Apple](https://huggingface.co./apple) organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base
- the [coreml community](https://huggingface.co./coreml-community) includes custom finetuned models
- use this [filter](https://huggingface.co./models?pipeline_tag=text-to-image&library=coreml&p=2&sort=likes) to return all available Core ML checkpoints

If you can't find the model you're interested in, we recommend you follow the instructions for [Converting Models to Core ML](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) by Apple.
## Selecting the Core ML variant to use

Stable Diffusion models can be converted to different Core ML variants intended for different purposes:
- The type of attention blocks used. The attention operation is used to "pay attention" to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants:

    * `split_einsum` ([introduced by Apple](https://machinelearning.apple.com/research/neural-engine-transformers)) is optimized for the ANE, which is available in modern iPhones, iPads, and M-series computers.
    * The "original" attention (the base implementation used in `diffusers`) is only compatible with CPU/GPU and not the ANE. It can be *faster* to run your model on CPU + GPU using `original` attention than on the ANE. See [this performance benchmark](https://huggingface.co./blog/fast-mac-diffusers#performance-benchmarks) as well as some [additional measures provided by the community](https://github.com/huggingface/swift-coreml-diffusers/issues/31) for additional details.

- The supported inference framework.
    * `packages` are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don't need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend.
    * `compiled` models are required for Swift code. The `compiled` models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the [`--chunk-unet` conversion option](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). If you want to support native apps, then you need to select the `compiled` variant.
The official Core ML Stable Diffusion [models](https://huggingface.co./apple/coreml-stable-diffusion-v1-4/tree/main) include these variants, but the community ones may vary:

```
coreml-stable-diffusion-v1-4
β”œβ”€β”€ README.md
β”œβ”€β”€ original
β”‚   β”œβ”€β”€ compiled
β”‚   └── packages
└── split_einsum
    β”œβ”€β”€ compiled
    └── packages
```

You can download and use the variant you need as shown below.
## Core ML inference in Python

Install the following libraries to run Core ML inference in Python:

```bash
pip install huggingface_hub
pip install git+https://github.com/apple/ml-stable-diffusion
```
### Download the model checkpoints

To run inference in Python, use one of the versions stored in the `packages` folders because the `compiled` ones are only compatible with Swift. You may choose whether you want to use `original` or `split_einsum` attention.

This is how you'd download the `original` attention variant from the Hub to a directory called `models`:

```Python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```
### Inference

Once you have downloaded a snapshot of the model, you can test it using Apple's Python script.

```shell
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i ./models/coreml-stable-diffusion-v1-4_original_packages/original/packages -o </path/to/output/image> --compute-unit CPU_AND_GPU --seed 93
```