---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: an icon of trpfrog
widget:
- text: an icon of trpfrog eating ramen
  output:
    url: image_0.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_1.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_2.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_3.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_4.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_5.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_6.png
- text: an icon of trpfrog eating ramen
  output:
    url: image_7.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
datasets:
- trpfrog/trpfrog-icons
- Prgckwb/trpfrog-icons-dreambooth
---
# SDXL LoRA DreamBooth - Prgckwb/trpfrog-sdxl-lora
## Model description

These are Prgckwb/trpfrog-sdxl-lora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth.

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words

You should use `an icon of trpfrog` in your prompt to trigger the image generation.
## How to use

```python
import torch
from diffusers import DiffusionPipeline

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_model_id = "Prgckwb/trpfrog-sdxl-lora"

# Load the SDXL base pipeline in half precision and move it to the GPU
pipe = DiffusionPipeline.from_pretrained(
    base_model_id, torch_dtype=torch.float16
).to("cuda")

# Apply the DreamBooth LoRA weights on top of the base model
pipe.load_lora_weights(lora_model_id)

# The trigger phrase "an icon of trpfrog" activates the learned concept
image = pipe(
    "an icon of trpfrog",
    num_inference_steps=25,
).images[0]
image.save("trpfrog.png")
```