LoRA text2image fine-tuning - https://huggingface.co./pcuenq/pokemon-lora

These are LoRA adaptation weights for the base model https://huggingface.co./runwayml/stable-diffusion-v1-5, fine-tuned on the lambdalabs/pokemon-blip-captions dataset.

How to Use

The script below loads the base model, then applies the LoRA weights and performs inference:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
from huggingface_hub import model_info

# LoRA weights ~3 MB
model_path = "pcuenq/pokemon-lora"

# Read the base model id from the LoRA repo's model card metadata
info = model_info(model_path)
model_base = info.cardData["base_model"]

# Load the base Stable Diffusion pipeline in half precision
pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# Apply the LoRA attention weights on top of the frozen base UNet, then move to GPU
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")

image = pipe("Green pokemon with menacing face", num_inference_steps=25).images[0]
image.save("green_pokemon.png")
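
Because the LoRA weights are applied as attention processors on top of the frozen base UNet, their influence can be dialed up or down at inference time. The sketch below is not part of the original card; it assumes a diffusers version that accepts a scale entry in cross_attention_kwargs, which interpolates between the base attention weights (0.0) and the full LoRA update (1.0). The 0.5 value and the output filename are illustrative choices.

# Minimal sketch (assumption: diffusers supports the "scale" entry in cross_attention_kwargs)
image = pipe(
    "Green pokemon with menacing face",
    num_inference_steps=25,
    cross_attention_kwargs={"scale": 0.5},  # blend: 0.0 = base model only, 1.0 = full LoRA effect
).images[0]
image.save("green_pokemon_half_lora.png")  # hypothetical output name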

Please check our blog post or the documentation for more details.

Example Images

Four sample images generated with these LoRA weights are shown in the original model card.
