source (stringclasses, 273 values) | url (stringlengths, 47–172) | file_type (stringclasses, 1 value) | chunk (stringlengths, 1–512) | chunk_id (stringlengths, 5–9)
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co./docs/diffusers/en/optimization/xdit/#stable-diffusion-3 | .md | <div class="flex justify-center">
<img src="https://huggingface.co./datasets/xDiT/documentation-images/resolve/main/performance/sd3/L40-SD3.png">
</div>
<div class="flex justify-center">
<img src="https://huggingface.co./datasets/xDiT/documentation-images/resolve/main/performance/sd3/A100-SD3.png">
</div> | 16_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co./docs/diffusers/en/optimization/xdit/#hunyuandit | .md | <div class="flex justify-center">
<img src="https://huggingface.co./datasets/xDiT/documentation-images/resolve/main/performance/hunuyuandit/L40-HunyuanDiT.png">
</div>
<div class="flex justify-center">
<img src="https://huggingface.co./datasets/xDiT/documentation-images/resolve/main/performance/hunuyuandit/V100-HunyuanDiT.png">
</div>
<div class="flex justify-center">
<img src="https://huggingface.co./datasets/xDiT/documentation-images/resolve/main/performance/hunuyuandit/T4-HunyuanDiT.png">
</div> | 16_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co./docs/diffusers/en/optimization/xdit/#hunyuandit | .md | </div>
More detailed performance metrics can be found on our [GitHub page](https://github.com/xdit-project/xDiT?tab=readme-ov-file#perf). | 16_5_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co./docs/diffusers/en/optimization/xdit/#reference | .md | [xDiT-project](https://github.com/xdit-project/xDiT)
[USP: A Unified Sequence Parallelism Approach for Long Context Generative AI](https://arxiv.org/abs/2405.07719)
[PipeFusion: Displaced Patch Pipeline Parallelism for Inference of Diffusion Transformer Models](https://arxiv.org/abs/2405.14430) | 16_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 17_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 17_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#metal-performance-shaders-mps | .md | 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch [`mps`](https://pytorch.org/docs/stable/notes/mps.html) device, which uses the Metal framework to leverage the GPU on macOS devices. You'll need to have:
- macOS computer with Apple silicon (M1/M2) hardware
- macOS 12.6 or later (13.0 or later recommended)
- arm64 version of Python
- [PyTorch 2.0](https://pytorch.org/get-started/locally/) (recommended) or 1.13 (minimum version supported for `mps`) | 17_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#metal-performance-shaders-mps | .md | - [PyTorch 2.0](https://pytorch.org/get-started/locally/) (recommended) or 1.13 (minimum version supported for `mps`)
The `mps` backend uses PyTorch's `.to()` interface to move the Stable Diffusion pipeline onto your M1 or M2 device:
```python
from diffusers import DiffusionPipeline | 17_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#metal-performance-shaders-mps | .md | pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
pipe = pipe.to("mps")
# Recommended if your computer has < 64 GB of RAM
pipe.enable_attention_slicing() | 17_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#metal-performance-shaders-mps | .md | prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image
```
<Tip warning={true}>
Generating multiple prompts in a batch can [crash](https://github.com/huggingface/diffusers/issues/363) or fail to work reliably. We believe this is related to the [`mps`](https://github.com/pytorch/pytorch/issues/84039) backend in PyTorch. While this is being investigated, you should iterate instead of batching.
</Tip> | 17_1_3 |
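If you need several images on `mps`, a minimal sketch (reusing the `pipe` object created above) is to loop over the prompts one at a time rather than passing them as a batch:
```python
# Minimal sketch: generate several images by iterating instead of batching.
# Assumes `pipe` has already been created and moved to "mps" as shown above.
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a lighthouse at dusk",
]

images = []
for prompt in prompts:
    # One prompt per call sidesteps the batching issues described in the tip above.
    images.append(pipe(prompt).images[0])

for i, image in enumerate(images):
    image.save(f"mps_result_{i}.png")
```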
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#metal-performance-shaders-mps | .md | </Tip>
If you're using **PyTorch 1.13**, you need to "prime" the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result.
```diff
from diffusers import DiffusionPipeline | 17_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#metal-performance-shaders-mps | .md | pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()
prompt = "a photo of an astronaut riding a horse on mars"
# First-time "warmup" pass if PyTorch version is 1.13
+ _ = pipe(prompt, num_inference_steps=1)
# Results match those from the CPU device after the warmup pass.
image = pipe(prompt).images[0]
``` | 17_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#troubleshoot | .md | M1/M2 performance is very sensitive to memory pressure. When memory runs low, the system automatically swaps to disk, which significantly degrades performance. | 17_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#troubleshoot | .md | To prevent this from happening, we recommend *attention slicing* to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the [`~DiffusionPipeline.enable_attention_slicing`] function on your pipeline:
```py
from diffusers import DiffusionPipeline
import torch | 17_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/mps.md | https://huggingface.co./docs/diffusers/en/optimization/mps/#troubleshoot | .md | pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps")
pipeline.enable_attention_slicing()
```
Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% on computers without universal memory, but we've observed *better performance* on most Apple silicon computers unless you have 64GB of RAM or more. | 17_2_2 |
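Conceptually, sliced attention processes the attention computation chunk-by-chunk along the batch-and-heads dimension so that only one slice's attention matrix is materialized at a time. The snippet below is an illustrative sketch of the idea, not the actual diffusers implementation; the shapes and slice size are arbitrary:
```python
import torch

def sliced_attention(query, key, value, slice_size=1):
    # query/key/value: (batch * heads, seq_len, head_dim). Only one slice's
    # (seq_len x seq_len) attention matrix exists in memory at any time.
    scale = query.shape[-1] ** -0.5
    out = torch.empty_like(query)
    for start in range(0, query.shape[0], slice_size):
        end = start + slice_size
        attn = torch.softmax(query[start:end] @ key[start:end].transpose(1, 2) * scale, dim=-1)
        out[start:end] = attn @ value[start:end]
    return out

q = k = v = torch.randn(8, 4096, 40)  # illustrative shapes for a 512x512 UNet attention layer
result = sliced_attention(q, k, v, slice_size=2)
```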
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 18_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 18_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#onnx-runtime | .md | 🤗 [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with ONNX Runtime. You'll need to install 🤗 Optimum with the following command for ONNX Runtime support:
```bash
pip install -q optimum["onnxruntime"]
```
This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. | 18_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion | .md | To load and run inference, use the [`~optimum.onnxruntime.ORTStableDiffusionPipeline`]. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set `export=True`:
```python
from optimum.onnxruntime import ORTStableDiffusionPipeline | 18_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion | .md | model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```
<Tip warning={true}>
Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching.
</Tip> | 18_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion | .md | </Tip>
To export the pipeline in the ONNX format offline and use it later for inference,
use the [`optimum-cli export`](https://huggingface.co./docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:
```bash
optimum-cli export onnx --model stable-diffusion-v1-5/stable-diffusion-v1-5 sd_v15_onnx/
```
Then to perform inference (you don't have to specify `export=True` again):
```python
from optimum.onnxruntime import ORTStableDiffusionPipeline | 18_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion | .md | model_id = "sd_v15_onnx"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```
<div class="flex justify-center">
<img src="https://huggingface.co./datasets/optimum/documentation-images/resolve/main/onnxruntime/stable_diffusion_v1_5_ort_sail_boat.png">
</div> | 18_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion | .md | </div>
You can find more examples in 🤗 Optimum [documentation](https://huggingface.co./docs/optimum/), and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. | 18_2_4 |
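For example, image-to-image follows the same pattern. The sketch below assumes `ORTStableDiffusionImg2ImgPipeline` accepts the same arguments as the regular diffusers image-to-image pipeline, and the input image path is a placeholder:
```python
from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline
from PIL import Image

# Reuse the ONNX weights exported earlier; export=True would also work for a PyTorch checkpoint.
pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained("sd_v15_onnx")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))  # placeholder input image
prompt = "sailing ship in storm by Leonardo da Vinci, oil painting"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
image.save("img2img-result.png")
```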
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion-xl | .md | To load and run inference with SDXL, use the [`~optimum.onnxruntime.ORTStableDiffusionXLPipeline`]:
```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline | 18_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion-xl | .md | model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```
To export the pipeline in the ONNX format and use it later for inference, use the [`optimum-cli export`](https://huggingface.co./docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:
```bash | 18_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/onnx.md | https://huggingface.co./docs/diffusers/en/optimization/onnx/#stable-diffusion-xl | .md | ```bash
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/
```
SDXL in the ONNX format is supported for text-to-image and image-to-image. | 18_3_2 |
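For image-to-image, one option is to load the SDXL refiner through `ORTStableDiffusionXLImg2ImgPipeline`; the sketch below assumes its arguments mirror the PyTorch SDXL image-to-image pipeline, and the starting image is a placeholder:
```python
from optimum.onnxruntime import ORTStableDiffusionXLImg2ImgPipeline
from PIL import Image

# Convert the refiner to ONNX on the fly; save_pretrained() can cache the export for reuse.
pipeline = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", export=True
)

init_image = Image.open("base-image.png").convert("RGB")  # placeholder starting image
prompt = "sailing ship in storm by Leonardo da Vinci, dramatic lighting"
image = pipeline(prompt=prompt, image=init_image, strength=0.5).images[0]
image.save("sdxl-img2img.png")
```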
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 19_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 19_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#token-merging | .md | [Token merging](https://huggingface.co./papers/2303.17604) (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network, which can reduce the inference latency of [`StableDiffusionPipeline`].
Install ToMe from `pip`:
```bash
pip install tomesd
```
You can use ToMe from the [`tomesd`](https://github.com/dbolya/tomesd) library with the [`apply_patch`](https://github.com/dbolya/tomesd?tab=readme-ov-file#usage) function:
```diff | 19_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#token-merging | .md | ```diff
from diffusers import StableDiffusionPipeline
import torch
import tomesd | 19_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#token-merging | .md | pipeline = StableDiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
+ tomesd.apply_patch(pipeline, ratio=0.5) | 19_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#token-merging | .md | image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```
The `apply_patch` function exposes a number of [arguments](https://github.com/dbolya/tomesd#usage) to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is `ratio` which controls the number of tokens that are merged during the forward pass. | 19_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#token-merging | .md | As reported in the [paper](https://huggingface.co./papers/2303.17604), ToMe can largely preserve the quality of the generated images while boosting inference speed. By increasing the `ratio`, you can speed up inference even further, but at the cost of some degraded image quality.
To test the quality of the generated images, we sampled a few prompts from [Parti Prompts](https://parti.research.google/) and performed inference with the [`StableDiffusionPipeline`] with the following settings: | 19_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#token-merging | .md | <div class="flex justify-center">
<img src="https://huggingface.co./datasets/diffusers/docs-images/resolve/main/tome/tome_samples.png">
</div>
We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this [WandB report](https://wandb.ai/sayakpaul/tomesd-results/runs/23j4bj3i?workspace=). If you're interested in reproducing this experiment, use this [script](https://gist.github.com/sayakpaul/8cac98d7f22399085a060992f411ecbd). | 19_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | We also benchmarked the impact of `tomesd` on the [`StableDiffusionPipeline`] with [xFormers](https://huggingface.co./docs/diffusers/optimization/xformers) enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment:
```bash
- `diffusers` version: 0.15.1
- Python version: 3.8.16
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Huggingface_hub version: 0.13.2
- Transformers version: 4.27.2
- Accelerate version: 0.18.0 | 19_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | - Huggingface_hub version: 0.13.2
- Transformers version: 4.27.2
- Accelerate version: 0.18.0
- xFormers version: 0.0.16
- tomesd version: 0.1.2
```
To reproduce this benchmark, feel free to use this [script](https://gist.github.com/sayakpaul/27aec6bca7eb7b0e0aa4112205850335). The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. | 19_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | **GPU** | **Resolution** | **Batch size** | **Vanilla** | **ToMe** | **ToMe + xFormers** |
|----------|----------------|----------------|-------------|----------------|---------------------|
| **A100** | 512 | 10 | 6.88 | 5.26 (+23.55%) | 4.69 (+31.83%) |
| | 768 | 10 | OOM | 14.71 | 11 |
| | | 8 | OOM | 11.56 | 8.84 | | 19_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | | 8 | OOM | 11.56 | 8.84 |
| | | 4 | OOM | 5.98 | 4.66 |
| | | 2 | 4.99 | 3.24 (+35.07%) | 2.1 (+37.88%) |
| | | 1 | 3.29 | 2.24 (+31.91%) | 2.03 (+38.3%) |
| | 1024 | 10 | OOM | OOM | OOM | | 19_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | 1024 | 10 | OOM | OOM | OOM |
| | | 8 | OOM | OOM | OOM |
| | | 4 | OOM | 12.51 | 9.09 |
| | | 2 | OOM | 6.52 | 4.96 |
| | | 1 | 6.4 | 3.61 (+43.59%) | 2.81 (+56.09%) | | 19_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | | 1 | 6.4 | 3.61 (+43.59%) | 2.81 (+56.09%) |
| **V100** | 512 | 10 | OOM | 10.03 | 9.29 |
| | | 8 | OOM | 8.05 | 7.47 |
| | | 4 | 5.7 | 4.3 (+24.56%) | 3.98 (+30.18%) |
| | | 2 | 3.14 | 2.43 (+22.61%) | 2.27 (+27.71%) | | 19_2_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | | 2 | 3.14 | 2.43 (+22.61%) | 2.27 (+27.71%) |
| | | 1 | 1.88 | 1.57 (+16.49%) | 1.57 (+16.49%) |
| | 768 | 10 | OOM | OOM | 23.67 |
| | | 8 | OOM | OOM | 18.81 |
| | | 4 | OOM | 11.81 | 9.7 | | 19_2_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | | 4 | OOM | 11.81 | 9.7 |
| | | 2 | OOM | 6.27 | 5.2 |
| | | 1 | 5.43 | 3.38 (+37.75%) | 2.82 (+48.07%) |
| | 1024 | 10 | OOM | OOM | OOM |
| | | 8 | OOM | OOM | OOM | | 19_2_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | | 8 | OOM | OOM | OOM |
| | | 4 | OOM | OOM | 19.35 |
| | | 2 | OOM | 13 | 10.78 |
| | | 1 | OOM | 6.66 | 5.54 | | 19_2_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/tome.md | https://huggingface.co./docs/diffusers/en/optimization/tome/#benchmarks | .md | | | | 1 | OOM | 6.66 | 5.54 |
As seen in the tables above, the speed-up from `tomesd` becomes more pronounced for larger image resolutions. It is also interesting to note that with `tomesd`, it is possible to run the pipeline at a higher resolution such as 1024x1024. You may be able to speed up inference even more with [`torch.compile`](torch2.0). | 19_2_9 |
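A sketch of combining the two is shown below; it assumes PyTorch 2.0+, and the first call is slower because of the one-time compilation:
```python
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Merge tokens first, then compile the UNet for an additional speed-up.
tomesd.apply_patch(pipeline, ratio=0.5)
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead")

# The first call pays the compilation cost; subsequent calls are faster.
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
```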
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 20_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 20_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | [DreamBooth](https://huggingface.co./papers/2208.12242) is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. | 20_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | If you're training on a GPU with limited vRAM, you should try enabling the `gradient_checkpointing` and `mixed_precision` parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. | 20_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | This guide will explore the [train_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
``` | 20_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | ```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Navigate to the example folder with the training script and install the required dependencies for the script you're using:
<hfoptions id="installation">
<hfoption id="PyTorch">
```bash
cd examples/dreambooth
pip install -r requirements.txt
```
</hfoption>
<hfoption id="Flax">
```bash
cd examples/dreambooth
pip install -r requirements_flax.txt
```
</hfoption>
</hfoptions>
<Tip> | 20_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | ```bash
cd examples/dreambooth
pip install -r requirements_flax.txt
```
</hfoption>
</hfoptions>
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co./docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
``` | 20_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | </Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To set up a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config | 20_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.
<Tip> | 20_1_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#dreambooth | .md | <Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) and let us know if you have any questions or concerns.
</Tip> | 20_1_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#script-parameters | .md | <Tip warning={true}>
DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the [Training Stable Diffusion with Dreambooth using 🧨 Diffusers](https://huggingface.co./blog/dreambooth) blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters.
</Tip> | 20_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#script-parameters | .md | </Tip>
The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L228) function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you'd like. | 20_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#script-parameters | .md | For example, to train in the bf16 format:
```bash
accelerate launch train_dreambooth.py \
--mixed_precision="bf16"
```
Some basic and important parameters to know and specify are:
- `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model
- `--instance_data_dir`: path to a folder containing the training dataset (example images)
- `--instance_prompt`: the text prompt that contains the special word for the example images | 20_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#script-parameters | .md | - `--instance_prompt`: the text prompt that contains the special word for the example images
- `--train_text_encoder`: whether to also train the text encoder
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub | 20_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#script-parameters | .md | - `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for any reason, you can continue from that checkpoint by adding `--resume_from_checkpoint` to your training command | 20_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#min-snr-weighting | .md | The [Min-SNR](https://huggingface.co./papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, and Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script.
Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:
```bash
accelerate launch train_dreambooth.py \ | 20_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#min-snr-weighting | .md | Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:
```bash
accelerate launch train_dreambooth.py \
--snr_gamma=5.0
``` | 20_3_1 |
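Under the hood, Min-SNR turns into a per-timestep weight on the MSE loss. The function below is an illustrative sketch following the paper's formulation, with SNR(t) = ᾱ_t / (1 − ᾱ_t); it is not the exact code in the training script, and the `alphas_cumprod` tensor used here is a stand-in for the scheduler's own values:
```python
import torch

def min_snr_weights(alphas_cumprod, timesteps, snr_gamma=5.0, prediction_type="epsilon"):
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for the DDPM forward process.
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # Clamping the numerator keeps low-noise timesteps from dominating the loss.
    if prediction_type == "epsilon":
        return torch.clamp(snr, max=snr_gamma) / snr
    if prediction_type == "v_prediction":
        return torch.clamp(snr, max=snr_gamma) / (snr + 1.0)
    raise ValueError(f"Unknown prediction type: {prediction_type}")

# Usage: multiply the per-sample MSE loss by these weights before averaging.
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)  # stand-in for noise_scheduler.alphas_cumprod
weights = min_snr_weights(alphas_cumprod, torch.randint(0, 1000, (4,)), snr_gamma=5.0)
```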
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#prior-preservation-loss | .md | Prior preservation loss is a method that uses a model's own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions.
- `--with_prior_preservation`: whether to use prior preservation loss | 20_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#prior-preservation-loss | .md | - `--with_prior_preservation`: whether to use prior preservation loss
- `--prior_loss_weight`: controls the influence of the prior preservation loss on the model
- `--class_data_dir`: path to a folder containing the generated class sample images
- `--class_prompt`: the text prompt describing the class of the generated sample images
```bash
accelerate launch train_dreambooth.py \
--with_prior_preservation \
--prior_loss_weight=1.0 \
--class_data_dir="path/to/class/images" \ | 20_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#prior-preservation-loss | .md | --with_prior_preservation \
--prior_loss_weight=1.0 \
--class_data_dir="path/to/class/images" \
--class_prompt="text prompt describing class"
``` | 20_4_2 |
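Inside the training step, the instance images and the generated class images are stacked into one batch, and the two losses are combined roughly as follows (a simplified sketch of what the script does, with random tensors standing in for the model predictions):
```python
import torch
import torch.nn.functional as F

def dreambooth_loss(model_pred, target, prior_loss_weight=1.0):
    # The batch holds the instance (subject) samples followed by the class samples.
    pred_instance, pred_prior = torch.chunk(model_pred, 2, dim=0)
    target_instance, target_prior = torch.chunk(target, 2, dim=0)

    # Loss on the subject images you provided.
    instance_loss = F.mse_loss(pred_instance.float(), target_instance.float(), reduction="mean")
    # Loss on the model's own class samples, which preserves what it knows about the class.
    prior_loss = F.mse_loss(pred_prior.float(), target_prior.float(), reduction="mean")
    return instance_loss + prior_loss_weight * prior_loss

# Illustrative shapes: (batch, latent channels, height, width) noise predictions and targets.
loss = dreambooth_loss(torch.randn(4, 4, 64, 64), torch.randn(4, 4, 64, 64), prior_loss_weight=1.0)
```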
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#train-text-encoder | .md | To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you'll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by:
```bash
accelerate launch train_dreambooth.py \
--train_text_encoder
``` | 20_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | DreamBooth comes with its own dataset classes:
- [`DreamBoothDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L604): preprocesses the images and class images, and tokenizes the prompts for training
- [`PromptDataset`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L738): generates the prompt embeddings to generate the class images | 20_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | If you enabled [prior preservation loss](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L842), the class images are generated here:
```py
sample_dataset = PromptDataset(args.class_prompt, num_new_images)
sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) | 20_6_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | sample_dataloader = accelerator.prepare(sample_dataloader)
pipeline.to(accelerator.device) | 20_6_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | for example in tqdm(
sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process
):
images = pipeline(example["prompt"]).images
``` | 20_6_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | Next is the [`main()`](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L799) function which handles setting up the dataset for training and the training loop itself. The script loads the [tokenizer](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L898), [scheduler and | 20_6_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | [scheduler and models](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L912C1-L912C1): | 20_6_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | ```py
# Load the tokenizer
if args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False)
elif args.pretrained_model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(
args.pretrained_model_name_or_path,
subfolder="tokenizer",
revision=args.revision,
use_fast=False,
) | 20_6_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | # Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
text_encoder = text_encoder_cls.from_pretrained(
args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)
if model_has_vae(args):
vae = AutoencoderKL.from_pretrained(
args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision
)
else:
vae = None | 20_6_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | unet = UNet2DConditionModel.from_pretrained(
args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)
```
Then, it's time to [create the training dataset](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1073) and DataLoader from `DreamBoothDataset`:
```py
train_dataset = DreamBoothDataset(
instance_data_root=args.instance_data_dir,
instance_prompt=args.instance_prompt, | 20_6_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | ```py
train_dataset = DreamBoothDataset(
instance_data_root=args.instance_data_dir,
instance_prompt=args.instance_prompt,
class_data_root=args.class_data_dir if args.with_prior_preservation else None,
class_prompt=args.class_prompt,
class_num=args.num_class_images,
tokenizer=tokenizer,
size=args.resolution,
center_crop=args.center_crop,
encoder_hidden_states=pre_computed_encoder_hidden_states,
class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, | 20_6_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states,
tokenizer_max_length=args.tokenizer_max_length,
) | 20_6_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | train_dataloader = torch.utils.data.DataLoader(
train_dataset,
batch_size=args.train_batch_size,
shuffle=True,
collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation),
num_workers=args.dataloader_num_workers,
)
``` | 20_6_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | num_workers=args.dataloader_num_workers,
)
```
Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/072e00897a7cf4302c347a63ec917b4b8add16d4/examples/dreambooth/train_dreambooth.py#L1151) takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. | 20_6_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#training-script | .md | If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. | 20_6_13 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | You're now ready to launch the training script! 🚀
For this guide, you'll download some images of a [dog](https://huggingface.co./datasets/diffusers/dog-example) and store them in a directory. But remember, you can create and use your own dataset if you want (see the [Create a dataset for training](create_dataset) guide).
```py
from huggingface_hub import snapshot_download | 20_7_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | local_dir = "./dog"
snapshot_download(
"diffusers/dog-example",
local_dir=local_dir,
repo_type="dataset",
ignore_patterns=".gitattributes",
)
```
Set the environment variable `MODEL_NAME` to a model id on the Hub or a path to a local model, `INSTANCE_DIR` to the path where you just downloaded the dog images to, and `OUTPUT_DIR` to where you want to save the model. You'll use `sks` as the special word to tie the training to. | 20_7_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | If you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command:
```bash
--validation_prompt="a photo of a sks dog"
--num_validation_images=4
--validation_steps=100
```
One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth.
<hfoptions id="gpu-select">
<hfoption id="16GB"> | 20_7_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | <hfoptions id="gpu-select">
<hfoption id="16GB">
On a 16GB GPU, you can use the bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes:
```bash
pip install bitsandbytes
```
Then, add the following parameters to your training command:
```bash
accelerate launch train_dreambooth.py \
--gradient_checkpointing \
--use_8bit_adam \
```
</hfoption>
<hfoption id="12GB"> | 20_7_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | accelerate launch train_dreambooth.py \
--gradient_checkpointing \
--use_8bit_adam \
```
</hfoption>
<hfoption id="12GB">
On a 12GB GPU, you'll need the bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and to set the gradients to `None` instead of zero to reduce memory usage.
```bash
accelerate launch train_dreambooth.py \
--use_8bit_adam \
--gradient_checkpointing \
--enable_xformers_memory_efficient_attention \
--set_grads_to_none \
```
</hfoption>
<hfoption id="8GB"> | 20_7_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | --enable_xformers_memory_efficient_attention \
--set_grads_to_none \
```
</hfoption>
<hfoption id="8GB">
On an 8GB GPU, you'll need [DeepSpeed](https://www.deepspeed.ai/) to offload some of the tensors from the vRAM to either the CPU or NVMe to allow training with less GPU memory.
Run the following command to configure your 🤗 Accelerate environment:
```bash
accelerate config
``` | 20_7_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | ```bash
accelerate config
```
During configuration, confirm that you want to use DeepSpeed. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the [DeepSpeed documentation](https://huggingface.co./docs/accelerate/usage_guides/deepspeed) for more configuration options. | 20_7_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam [`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch.
bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. | 20_7_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment.
That's it! You don't need to add any additional parameters to your training command.
</hfoption>
</hfoptions>
<hfoptions id="training-inference">
<hfoption id="PyTorch">
```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export INSTANCE_DIR="./dog"
export OUTPUT_DIR="path_to_saved_model" | 20_7_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | accelerate launch train_dreambooth.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a photo of sks dog" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=1 \
--learning_rate=5e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--max_train_steps=400 \
--push_to_hub
```
</hfoption>
<hfoption id="Flax">
```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export INSTANCE_DIR="./dog" | 20_7_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | </hfoption>
<hfoption id="Flax">
```bash
export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
export INSTANCE_DIR="./dog"
export OUTPUT_DIR="path-to-save-model" | 20_7_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | python train_dreambooth_flax.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a photo of sks dog" \
--resolution=512 \
--train_batch_size=1 \
--learning_rate=5e-6 \
--max_train_steps=400 \
--push_to_hub
```
</hfoption>
</hfoptions>
Once training is complete, you can use your newly trained model for inference!
<Tip> | 20_7_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | ```
</hfoption>
</hfoptions>
Once training is complete, you can use your newly trained model for inference!
<Tip>
Can't wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed.
```py
from diffusers import DiffusionPipeline, UNet2DConditionModel
from transformers import CLIPTextModel
import torch | 20_7_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet")
# if you trained with `--train_text_encoder`, make sure to also load the text encoder
text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/text_encoder")
pipeline = DiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16,
).to("cuda") | 20_7_13 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```
</Tip>
<hfoptions id="training-inference">
<hfoption id="PyTorch">
```py
from diffusers import DiffusionPipeline
import torch | 20_7_14 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```
</hfoption>
<hfoption id="Flax">
```py
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline | 20_7_15 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path-to-your-trained-model", dtype=jax.numpy.bfloat16)
prompt = "A photo of sks dog in a bucket"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids) | 20_7_16 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#launch-the-script | .md | images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
images[0].save("dog-bucket.png")
```
</hfoption>
</hfoptions> | 20_7_17 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#lora | .md | LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the [train_dreambooth_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py) script to train with LoRA.
The LoRA training script is discussed in more detail in the [LoRA training](lora) guide. | 20_8_0 |
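After training, the LoRA weights can be loaded on top of the base model for inference; a sketch (the weights path below is a placeholder for whatever you passed as `--output_dir`):
```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Load the LoRA weights produced by train_dreambooth_lora.py (path is illustrative).
pipeline.load_lora_weights("path/to/lora/output_dir")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket-lora.png")
```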
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#stable-diffusion-xl | .md | Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [train_dreambooth_lora_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py) script to train a SDXL model with LoRA.
The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide. | 20_9_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co./docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | DeepFloyd IF is a cascading pixel diffusion model with three stages. The first stage generates a base image and the second and third stages progressively upscale the base image into a high-resolution 1024x1024 image. Use the [train_dreambooth_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py) or [train_dreambooth.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) scripts to train a DeepFloyd IF model with | 20_10_0 |