Load LoRAs for inference

There are many adapter types (with LoRAs being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images.

In this tutorial, you’ll learn how to easily load and manage adapters for inference with the 🤗 PEFT integration in 🤗 Diffusers. You’ll use LoRA as the main adapter technique, so you’ll see the terms LoRA and adapter used interchangeably.

Let’s first install all the required libraries.

!pip install -q transformers accelerate peft diffusers

Now, load a pipeline with a Stable Diffusion XL (SDXL) checkpoint:

from diffusers import DiffusionPipeline
import torch

pipe_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda")

Next, load a CiroN2022/toy-face adapter with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let’s call this adapter "toy".

pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

Make sure to include the trigger token toy_face in the prompt, and then you can run inference:

prompt = "toy_face of a hacker with a hoodie"

lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image

(Image: toy-face)

With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images and call it "pixel".

The pipeline automatically sets the first loaded adapter ("toy") as the active adapter, but you can activate the "pixel" adapter with the set_adapters() method:

pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipe.set_adapters("pixel")

Make sure you include the token pixel art in your prompt to generate a pixel art image:

prompt = "a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image

(Image: pixel-art)

By default, if the most up-to-date versions of PEFT and Transformers are detected, low_cpu_mem_usage is set to True to speed up the loading time of LoRA checkpoints.
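
You can also control this behavior explicitly by passing the argument yourself. A minimal sketch, assuming your installed Diffusers version exposes low_cpu_mem_usage on load_lora_weights(); for example, the earlier "toy" checkpoint could have been loaded with the flag set:

pipe.load_lora_weights(
    "CiroN2022/toy-face",
    weight_name="toy_face_sdxl.safetensors",
    adapter_name="toy",
    low_cpu_mem_usage=True,  # assumed supported here; speeds up loading with recent PEFT/Transformers
)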

Merge adapters

You can also merge different adapter checkpoints for inference to blend their styles together.

Once again, use the set_adapters() method to activate the pixel and toy adapters and specify the weights for how they should be merged.

pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])

LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts.

Remember to use the trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl (these are found in their repositories) in the prompt to generate an image.

prompt = "toy_face of a hacker with a hoodie, pixel art"
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0)
).images[0]
image

(Image: toy-face-pixel-art)

Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters.

Through its PEFT integration, Diffusers also offers more efficient merging methods, which you can learn about in the Merge LoRAs guide!
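
One of those options is the fuse_lora() method, which folds the currently active LoRA weights directly into the base model weights so that no extra adapter computation happens at inference. A minimal sketch, assuming the default arguments suit your use case:

pipe.fuse_lora()    # fuse the active adapters ("pixel" and "toy") into the base weights
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
pipe.unfuse_lora()  # undo the fusion and restore the original base weights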

To go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter:

pipe.set_adapters("toy")

prompt = "toy_face of a hacker with a hoodie"
lora_scale = 0.9
image = pipe(
    prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0)
).images[0]
image

Or to disable all adapters entirely, use the disable_lora() method to return to the base model.

pipe.disable_lora()

prompt = "toy_face of a hacker with a hoodie"
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

(Image: no-lora)

Customize adapter strength

For even more customization, you can control how strongly the adapter affects each part of the pipeline. For this, pass a dictionary with the control strengths (called “scales”) to the set_adapters() method.

For example, here’s how you can turn on the adapter for the down parts, but turn it off for the mid and up parts:

pipe.enable_lora()  # enable lora again, after we disabled it above
prompt = "toy_face of a hacker with a hoodie, pixel art"
adapter_weight_scales = { "unet": { "down": 1, "mid": 0, "up": 0} }
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

(Image: block-lora-text-and-down)

Let’s see how the image changes when we turn off the down part and instead turn on the mid or the up part.

adapter_weight_scales = { "unet": { "down": 0, "mid": 1, "up": 0} }
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

(Image: block-lora-text-and-mid)

adapter_weight_scales = { "unet": { "down": 0, "mid": 0, "up": 1} }
pipe.set_adapters("pixel", adapter_weight_scales)
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

(Image: block-lora-text-and-up)

Looks cool!

This is a really powerful feature: you can control adapter strengths down to the per-transformer level, and you can even use it for multiple adapters at once.

adapter_weight_scales_toy = 0.5
adapter_weight_scales_pixel = {
    "unet": {
        "down": 0.9,  # all transformers in the down-part will use scale 0.9
        # "mid"  # because, in this example, "mid" is not given, all transformers in the mid part will use the default scale 1.0
        "up": {
            "block_0": 0.6,  # all 3 transformers in the 0th block in the up-part will use scale 0.6
            "block_1": [0.4, 0.8, 1.0],  # the 3 transformers in the 1st block in the up-part will use scales 0.4, 0.8 and 1.0 respectively
        }
    }
}
pipe.set_adapters(["toy", "pixel"], [adapter_weight_scales_toy, adapter_weight_scales_pixel])
image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
image

(Image: block-lora-mixed)

Manage adapters

You have attached multiple adapters in this tutorial. If you’re losing track of which adapters have been attached to the pipeline’s components, use the get_active_adapters() method to check the list of active adapters:

active_adapters = pipe.get_active_adapters()
active_adapters
["toy", "pixel"]

You can also get the active adapters of each pipeline component with get_list_adapters():

list_adapters_component_wise = pipe.get_list_adapters()
list_adapters_component_wise
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}

The delete_adapters() method completely removes an adapter and its LoRA layers from a model.

pipe.delete_adapters("toy")
pipe.get_active_adapters()
["pixel"]