
Stable unCLIP

Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. Stable unCLIP still conditions on text embeddings as well. Given the two separate conditionings, stable unCLIP can be used for text-guided image variation. When combined with an unCLIP prior, it can also be used for full text-to-image generation.

To learn more about the unCLIP process, check out the following paper:

Hierarchical Text-Conditional Image Generation with CLIP Latents by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen.

Tips

Stable unCLIP takes a noise_level as input during inference. noise_level determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, no additional noise is added to the image embeddings, i.e. noise_level = 0 (see the image variation example below for a usage sketch).

Available checkpoints:

  • stabilityai/stable-diffusion-2-1-unclip-small: conditioned on CLIP ViT-L/14 image embeddings (used with the Karlo prior in the text-to-image example below).
  • stabilityai/stable-diffusion-2-1-unclip: conditioned on OpenCLIP ViT-H image embeddings (used in the image variation examples below).

Text-to-Image Generation

Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain's open source DALL-E 2 replication, Karlo (https://huggingface.co./kakaobrain/karlo-v1-alpha):

import torch
from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline
from diffusers.models import PriorTransformer
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

prior_model_id = "kakaobrain/karlo-v1-alpha"
data_type = torch.float16
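# Load the Karlo prior, its CLIP ViT-L/14 text encoder/tokenizer, and a DDPM prior scheduler.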
prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type)

prior_text_model_id = "openai/clip-vit-large-patch14"
prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id)
prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type)
prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler")
prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config)

stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small"

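# Pass the Karlo prior components into the Stable unCLIP pipeline.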
pipe = StableUnCLIPPipeline.from_pretrained(
    stable_unclip_model_id,
    torch_dtype=data_type,
    variant="fp16",
    prior_tokenizer=prior_tokenizer,
    prior_text_encoder=prior_text_model,
    prior=prior,
    prior_scheduler=prior_scheduler,
)

pipe = pipe.to("cuda")
wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular"

images = pipe(prompt=wave_prompt).images
images[0].save("waves.png")

For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend using it for this purpose.

Text-guided Image-to-Image Variation

from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image
import torch

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16"
)
pipe = pipe.to("cuda")

url = "https://huggingface.co./datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

images = pipe(init_image).images
images[0].save("variation_image.png")

Optionally, you can also pass a prompt to pipe such as:

prompt = "A fantasy landscape, trending on artstation"

images = pipe(init_image, prompt=prompt).images
images[0].save("variation_image_two.png")

Memory optimization

If you are short on GPU memory, you can enable smart CPU offloading so that models that are not needed immediately for a computation can be offloaded to CPU:

from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image
import torch

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16"
)
# Offload to CPU.
pipe.enable_model_cpu_offload()

url = "https://huggingface.co./datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

images = pipe(init_image).images
images[0]

Further memory optimizations are possible by enabling VAE slicing on the pipeline:

from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image
import torch

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16"
)
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

url = "https://huggingface.co./datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

images = pipe(init_image).images
images[0]

StableUnCLIPPipeline

class diffusers.StableUnCLIPPipeline

( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL )

Parameters

  • prior_tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • prior_text_encoder (CLIPTextModelWithProjection) — Frozen text-encoder.
  • prior (PriorTransformer) — The canonical unCLIP prior to approximate the image embedding from the text embedding.
  • prior_scheduler (KarrasDiffusionSchedulers) — Scheduler used in the prior denoising process.
  • image_normalizer (StableUnCLIPImageNormalizer) — Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image embeddings after the noise has been applied.
  • image_noising_scheduler (KarrasDiffusionSchedulers) — Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined by noise_level in StableUnCLIPPipeline.__call__.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • text_encoder (CLIPTextModelWithProjection) — Frozen text-encoder.
  • unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
  • scheduler (KarrasDiffusionSchedulers) — A scheduler to be used in combination with unet to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.

Pipeline for text-to-image generation using stable unCLIP.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

__call__

( prompt: typing.Union[str, typing.List[str], NoneType] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Optional[torch._C.Generator] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: typing.Optional[torch.FloatTensor] = None ) ImagePipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
  • height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
  • width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
  • num_inference_steps (int, optional, defaults to 20) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • guidance_scale (float, optional, defaults to 10.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
  • negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others.
  • generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
  • callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.cross_attention.
  • noise_level (int, optional, defaults to 0) — The amount of noise to add to the image embeddings. A higher noise_level increases the variance in the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings for details.
  • prior_num_inference_steps (int, optional, defaults to 25) — The number of denoising steps in the prior denoising process. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • prior_guidance_scale (float, optional, defaults to 4.0) — Guidance scale for the prior denoising process as defined in Classifier-Free Diffusion Guidance. prior_guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting prior_guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
  • prior_latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image embedding generation in the prior denoising process. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

Returns

ImagePipelineOutput or tuple

ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import StableUnCLIPPipeline

>>> pipe = StableUnCLIPPipeline.from_pretrained(
...     "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16
... )  # TODO update model path
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> images = pipe(prompt).images
>>> images[0].save("astronaut_horse.png")

enable_attention_slicing

( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

  • slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation.

When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.
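
For example, assuming pipe is the StableUnCLIPPipeline constructed in the text-to-image example above, a minimal sketch:

# Trade a small amount of speed for lower peak memory during attention.
pipe.enable_attention_slicing()  # "auto": compute attention in two steps
# pipe.enable_attention_slicing("max")  # maximum savings: one slice at a time
# pipe.disable_attention_slicing()  # go back to computing attention in one step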

disable_attention_slicing

( )

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go back to computing attention in one step.

enable_vae_slicing

( )

Enable sliced VAE decoding.

When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

disable_vae_slicing

( )

Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to computing decoding in one step.

enable_xformers_memory_efficient_attention

( attention_op: typing.Optional[typing.Callable] = None )

Parameters

  • attention_op (Callable, optional) — Override the default None operator for use as op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention as implemented in xformers.

When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.

Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention is used.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention

( )

Disable memory efficient attention as implemented in xformers.

enable_model_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.

enable_sequential_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
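
For example, a minimal sketch assuming pipe is the StableUnCLIPPipeline from the text-to-image example above; with offloading enabled you should not also call pipe.to("cuda"):

# Let accelerate manage device placement per submodule instead of moving the whole pipeline to CUDA.
pipe.enable_sequential_cpu_offload()
images = pipe(prompt=wave_prompt).images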

noise_image_embeddings

( image_embeds: Tensor noise_level: int noise: typing.Optional[torch.FloatTensor] = None generator: typing.Optional[torch._C.Generator] = None )

Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher noise_level increases the variance in the final un-noised images.

The noise is applied in two ways:

  1. A noise schedule is applied directly to the embeddings.
  2. A vector of sinusoidal time embeddings is appended to the output.

In both cases, the amount of noise is controlled by the same noise_level.

The embeddings are normalized before the noise is applied and un-normalized after the noise is applied.
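
As a rough conceptual sketch of these two steps (not the library's actual implementation; the schedule values and embedding dimension below are hypothetical):

import torch

image_embeds = torch.randn(1, 768)  # hypothetical CLIP image embedding
noise_level = 100                   # index into the noising schedule

# 1. Apply a noise schedule directly to the (normalized) embeddings.
alphas_cumprod = torch.linspace(0.999, 0.001, 1000)  # hypothetical schedule
alpha = alphas_cumprod[noise_level]
noised = alpha.sqrt() * image_embeds + (1 - alpha).sqrt() * torch.randn_like(image_embeds)

# 2. Append a sinusoidal embedding of noise_level so the unet also sees how much noise was added.
half_dim = image_embeds.shape[-1] // 2
freqs = torch.exp(-torch.arange(half_dim, dtype=torch.float32) * torch.log(torch.tensor(10000.0)) / half_dim)
angles = noise_level * freqs
time_embed = torch.cat([torch.sin(angles), torch.cos(angles)])[None, :]
conditioning = torch.cat([noised, time_embed], dim=-1)  # both steps are controlled by the same noise_level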

StableUnCLIPImg2ImgPipeline

class diffusers.StableUnCLIPImg2ImgPipeline

( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL )

Parameters

  • feature_extractor (CLIPImageProcessor) — Feature extractor for image pre-processing before being encoded.
  • image_encoder (CLIPVisionModelWithProjection) — CLIP vision model for encoding images.
  • image_normalizer (StableUnCLIPImageNormalizer) — Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image embeddings after the noise has been applied.
  • image_noising_scheduler (KarrasDiffusionSchedulers) — Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined by noise_level in StableUnCLIPPipeline.__call__.
  • tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
  • text_encoder (CLIPTextModel) — Frozen text-encoder.
  • unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
  • scheduler (KarrasDiffusionSchedulers) — A scheduler to be used in combination with unet to denoise the encoded image latents.
  • vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.

Pipeline for text-guided image to image generation using stable unCLIP.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

__call__

( image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Optional[torch._C.Generator] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None noise_level: int = 0 image_embeds: typing.Optional[torch.FloatTensor] = None ) ImagePipelineOutput or tuple

Parameters

  • prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be used or prompt is initialized to "".
  • image (torch.FloatTensor or PIL.Image.Image) — Image, or tensor representing an image batch. The image will be encoded to its CLIP embedding which the unet will be conditioned on. Note that the image is not encoded by the vae and then used as the latents in the denoising process, as it is in the standard Stable Diffusion text-guided image variation process.
  • height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
  • width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
  • num_inference_steps (int, optional, defaults to 20) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
  • guidance_scale (float, optional, defaults to 10.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
  • negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
  • num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
  • eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others.
  • generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
  • latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
  • prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
  • negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
  • output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
  • callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
  • callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
  • cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.cross_attention.
  • noise_level (int, optional, defaults to 0) — The amount of noise to add to the image embeddings. A higher noise_level increases the variance in the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings for details.
  • image_embeds (torch.FloatTensor, optional) — Pre-generated CLIP embeddings to condition the unet on. Note that these are not latents to be used in the denoising process. If you want to provide pre-generated latents, pass them to __call__ as latents.

Returns

ImagePipelineOutput or tuple

ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

>>> import requests
>>> import torch
>>> from PIL import Image
>>> from io import BytesIO

>>> from diffusers import StableUnCLIPImg2ImgPipeline

>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
...     "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16
... )  # TODO update model path
>>> pipe = pipe.to("cuda")

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))

>>> prompt = "A fantasy landscape, trending on artstation"

>>> images = pipe(prompt, init_image).images
>>> images[0].save("fantasy_landscape.png")

enable_attention_slicing

( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

  • slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation.

When this option is enabled, the attention module will split the input tensor in slices, to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.

disable_attention_slicing

( )

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go back to computing attention in one step.

enable_vae_slicing

( )

Enable sliced VAE decoding.

When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

disable_vae_slicing

( )

Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to computing decoding in one step.

enable_xformers_memory_efficient_attention

( attention_op: typing.Optional[typing.Callable] = None )

Parameters

  • attention_op (Callable, optional) — Override the default None operator for use as op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention as implemented in xformers.

When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference time. Speed up at training time is not guaranteed.

Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention is used.

Examples:

>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention

( )

Disable memory efficient attention as implemented in xformers.

enable_model_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.

enable_sequential_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.

noise_image_embeddings

( image_embeds: Tensor noise_level: int noise: typing.Optional[torch.FloatTensor] = None generator: typing.Optional[torch._C.Generator] = None )

Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher noise_level increases the variance in the final un-noised images.

The noise is applied in two ways:

  1. A noise schedule is applied directly to the embeddings.
  2. A vector of sinusoidal time embeddings is appended to the output.

In both cases, the amount of noise is controlled by the same noise_level.

The embeddings are normalized before the noise is applied and un-normalized after the noise is applied.