aMUSEd
aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen.
aMUSEd is a lightweight text-to-image model based on the MUSE architecture. It is particularly useful in applications that require a lightweight and fast model, such as generating many images quickly at once.
aMUSEd is a VQ-VAE token-based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with MUSE, it uses the smaller text encoder CLIP-L/14 instead of T5-XXL. Because of its small parameter count and the few forward passes needed for generation, aMUSEd can generate many images quickly. This benefit is especially noticeable at larger batch sizes.
The abstract from the paper is:
We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions.
| Model | Params |
|---|---|
| amused-256 | 603M |
| amused-512 | 608M |
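Because the model is small and needs only a few forward passes, it lends itself to batched generation. The following is an illustrative sketch, not taken from the original docs: it assumes the amused-256 checkpoint is available on the Hub under `amused/amused-256` (following the same naming as `amused/amused-512` used in the examples below) and generates several images per prompt in one call:

>>> import torch
>>> from diffusers import AmusedPipeline

>>> pipe = AmusedPipeline.from_pretrained("amused/amused-256", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # One batched call produces several candidates for the same prompt
>>> images = pipe("a snowy mountain lake at sunrise", num_images_per_prompt=8).images
>>> for i, image in enumerate(images):
...     image.save(f"mountain_{i}.png")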
AmusedPipeline
class diffusers.AmusedPipeline
< source >( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler )
__call__
< source >( prompt: typing.Union[str, typing.List[str], NoneType] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 generator: typing.Optional[torch._C.Generator] = None latents: typing.Optional[torch.IntTensor] = None prompt_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None negative_encoder_hidden_states: typing.Optional[torch.Tensor] = None output_type = 'pil' return_dict: bool = True callback: typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None callback_steps: int = 1 cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: typing.Tuple[int, int] = (0, 0) temperature: typing.Union[int, typing.Tuple[int, int], typing.List[int]] = (2, 0) ) → ImagePipelineOutput or tuple
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- height (`int`, optional, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) — The height in pixels of the generated image.
- width (`int`, optional, defaults to `self.transformer.config.sample_size * self.vae_scale_factor`) — The width in pixels of the generated image.
- num_inference_steps (`int`, optional, defaults to 12) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 10.0) — A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- num_images_per_prompt (`int`, optional, defaults to 1) — The number of images to generate per prompt.
- generator (`torch.Generator`, optional) — A `torch.Generator` to make generation deterministic.
- latents (`torch.IntTensor`, optional) — Pre-generated tokens representing latent vectors in `self.vqvae`, to be used as inputs for image generation. If not provided, the starting latents will be completely masked.
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. A single vector from the pooled and projected final hidden states.
- encoder_hidden_states (`torch.Tensor`, optional) — Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- negative_encoder_hidden_states (`torch.Tensor`, optional) — Analogous to `encoder_hidden_states` for the positive prompt.
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.
- callback (`Callable`, optional) — A function that is called every `callback_steps` steps during inference with the arguments `callback(step: int, timestep: int, latents: torch.Tensor)`.
- callback_steps (`int`, optional, defaults to 1) — The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- micro_conditioning_aesthetic_score (`int`, optional, defaults to 6) — The targeted aesthetic score according to the LAION aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of https://arxiv.org/abs/2307.01952.
- micro_conditioning_crop_coord (`Tuple[int]`, optional, defaults to (0, 0)) — The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952.
- temperature (`Union[int, Tuple[int, int], List[int]]`, optional, defaults to (2, 0)) — Configures the temperature scheduler on `self.scheduler`; see `AmusedScheduler.set_timesteps()`.
Returns
ImagePipelineOutput or tuple
If `return_dict` is `True`, ImagePipelineOutput is returned, otherwise a `tuple` is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
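Examples:

A minimal text-to-image sketch (illustrative, written to mirror the img2img and inpaint examples below; it assumes the same amused/amused-512 fp16 checkpoint):

>>> import torch
>>> from diffusers import AmusedPipeline

>>> pipe = AmusedPipeline.from_pretrained(
...     "amused/amused-512", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a portrait of a fox in a snowy forest"
>>> image = pipe(prompt, num_inference_steps=12, guidance_scale=10.0).images[0]
>>> image.save("fox.png")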
enable_xformers_memory_efficient_attention
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (`Callable`, optional) — Override the default `None` operator for use as the `op` argument to the `memory_efficient_attention()` function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: Flash Attention does not support the VAE's attention shape, so fall back to the default op for the VAE
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
< source >( )
Disable memory efficient attention from xFormers.
class diffusers.AmusedImg2ImgPipeline
< source >( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler )
__call__
< source >( prompt: typing.Union[str, typing.List[str], NoneType] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 generator: typing.Optional[torch._C.Generator] = None prompt_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None negative_encoder_hidden_states: typing.Optional[torch.Tensor] = None output_type = 'pil' return_dict: bool = True callback: typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None callback_steps: int = 1 cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: typing.Tuple[int, int] = (0, 0) temperature: typing.Union[int, typing.Tuple[int, int], typing.List[int]] = (2, 0) ) → ImagePipelineOutput or tuple
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) — `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it is a tensor or a list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image latents as `image`, but if latents are passed directly they are not encoded again.
- strength (`float`, optional, defaults to 0.5) — Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a starting point, and more noise is added the higher the `strength`. The number of denoising steps depends on the amount of noise initially added. When `strength` is 1, the added noise is maximum and the denoising process runs for the full number of iterations specified in `num_inference_steps`. A value of 1 essentially ignores `image`.
- num_inference_steps (`int`, optional, defaults to 12) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 10.0) — A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- num_images_per_prompt (`int`, optional, defaults to 1) — The number of images to generate per prompt.
- generator (`torch.Generator`, optional) — A `torch.Generator` to make generation deterministic.
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. A single vector from the pooled and projected final hidden states.
- encoder_hidden_states (`torch.Tensor`, optional) — Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- negative_encoder_hidden_states (`torch.Tensor`, optional) — Analogous to `encoder_hidden_states` for the positive prompt.
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.
- callback (`Callable`, optional) — A function that is called every `callback_steps` steps during inference with the arguments `callback(step: int, timestep: int, latents: torch.Tensor)`.
- callback_steps (`int`, optional, defaults to 1) — The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- micro_conditioning_aesthetic_score (`int`, optional, defaults to 6) — The targeted aesthetic score according to the LAION aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of https://arxiv.org/abs/2307.01952.
- micro_conditioning_crop_coord (`Tuple[int]`, optional, defaults to (0, 0)) — The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952.
- temperature (`Union[int, Tuple[int, int], List[int]]`, optional, defaults to (2, 0)) — Configures the temperature scheduler on `self.scheduler`; see `AmusedScheduler.set_timesteps()`.
Returns
ImagePipelineOutput or tuple
If `return_dict` is `True`, ImagePipelineOutput is returned, otherwise a `tuple` is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
Examples:
>>> import torch
>>> from diffusers import AmusedImg2ImgPipeline
>>> from diffusers.utils import load_image
>>> pipe = AmusedImg2ImgPipeline.from_pretrained(
... "amused/amused-512", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> prompt = "winter mountains"
>>> input_image = (
... load_image(
... "https://huggingface.co./datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg"
... )
... .resize((512, 512))
... .convert("RGB")
... )
>>> image = pipe(prompt, input_image).images[0]
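The `strength` argument is the main knob for image-to-image generation. The line below is illustrative rather than part of the original example: a lower value keeps more of the input image, while a value near 1 lets the prompt dominate:

>>> # Keep most of the input image (0.3 is a hypothetical value for illustration)
>>> image = pipe(prompt, input_image, strength=0.3).images[0]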
enable_xformers_memory_efficient_attention
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (`Callable`, optional) — Override the default `None` operator for use as the `op` argument to the `memory_efficient_attention()` function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: Flash Attention does not support the VAE's attention shape, so fall back to the default op for the VAE
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
< source >( )
Disable memory efficient attention from xFormers.
class diffusers.AmusedInpaintPipeline
< source >( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler )
__call__
< source >( prompt: typing.Union[str, typing.List[str], NoneType] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None mask_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 generator: typing.Optional[torch._C.Generator] = None prompt_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None negative_encoder_hidden_states: typing.Optional[torch.Tensor] = None output_type = 'pil' return_dict: bool = True callback: typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None callback_steps: int = 1 cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: typing.Tuple[int, int] = (0, 0) temperature: typing.Union[int, typing.Tuple[int, int], typing.List[int]] = (2, 0) ) → ImagePipelineOutput or tuple
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
- image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) — `Image`, numpy array or tensor representing an image batch to be used as the starting point. For both numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it is a tensor or a list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image latents as `image`, but if latents are passed directly they are not encoded again.
- mask_image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) — `Image`, numpy array or tensor representing an image batch to mask `image`. White pixels in the mask are repainted while black pixels are preserved. If `mask_image` is a PIL image, it is converted to a single channel (luminance) before use. If it is a numpy array or pytorch tensor, it should contain one color channel (L) instead of 3, so the expected shape for a pytorch tensor would be `(B, 1, H, W)`, `(B, H, W)`, `(1, H, W)`, or `(H, W)`, and for a numpy array `(B, H, W, 1)`, `(B, H, W)`, `(H, W, 1)`, or `(H, W)`.
- strength (`float`, optional, defaults to 1.0) — Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a starting point, and more noise is added the higher the `strength`. The number of denoising steps depends on the amount of noise initially added. When `strength` is 1, the added noise is maximum and the denoising process runs for the full number of iterations specified in `num_inference_steps`. A value of 1 essentially ignores `image`.
- num_inference_steps (`int`, optional, defaults to 12) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 10.0) — A higher guidance scale value encourages the model to generate images closely linked to the text `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- num_images_per_prompt (`int`, optional, defaults to 1) — The number of images to generate per prompt.
- generator (`torch.Generator`, optional) — A `torch.Generator` to make generation deterministic.
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument. A single vector from the pooled and projected final hidden states.
- encoder_hidden_states (`torch.Tensor`, optional) — Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- negative_encoder_hidden_states (`torch.Tensor`, optional) — Analogous to `encoder_hidden_states` for the positive prompt.
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image` or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return an ImagePipelineOutput instead of a plain tuple.
- callback (`Callable`, optional) — A function that is called every `callback_steps` steps during inference with the arguments `callback(step: int, timestep: int, latents: torch.Tensor)`.
- callback_steps (`int`, optional, defaults to 1) — The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- micro_conditioning_aesthetic_score (`int`, optional, defaults to 6) — The targeted aesthetic score according to the LAION aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ and the micro-conditioning section of https://arxiv.org/abs/2307.01952.
- micro_conditioning_crop_coord (`Tuple[int]`, optional, defaults to (0, 0)) — The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952.
- temperature (`Union[int, Tuple[int, int], List[int]]`, optional, defaults to (2, 0)) — Configures the temperature scheduler on `self.scheduler`; see `AmusedScheduler.set_timesteps()`.
Returns
ImagePipelineOutput or tuple
If `return_dict` is `True`, ImagePipelineOutput is returned, otherwise a `tuple` is returned where the first element is a list with the generated images.
The call function to the pipeline for generation.
Examples:
>>> import torch
>>> from diffusers import AmusedInpaintPipeline
>>> from diffusers.utils import load_image
>>> pipe = AmusedInpaintPipeline.from_pretrained(
... "amused/amused-512", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> prompt = "fall mountains"
>>> input_image = (
... load_image(
... "https://huggingface.co./datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg"
... )
... .resize((512, 512))
... .convert("RGB")
... )
>>> mask = (
... load_image(
... "https://huggingface.co./datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png"
... )
... .resize((512, 512))
... .convert("L")
... )
>>> pipe(prompt, input_image, mask).images[0].save("out.png")
enable_xformers_memory_efficient_attention
< source >( attention_op: typing.Optional[typing.Callable] = None )
Parameters
- attention_op (`Callable`, optional) — Override the default `None` operator for use as the `op` argument to the `memory_efficient_attention()` function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: Flash Attention does not support the VAE's attention shape, so fall back to the default op for the VAE
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
< source >( )
Disable memory efficient attention from xFormers.