---
license: mit
base_model: warp-ai/wuerstchen-prior
datasets:
- Aff4n20/ancient-coin-dataset
tags:
- wuerstchen
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---

# LoRA Finetuning - Aff4n20/wuerstchen-ancient-coins

This pipeline was finetuned from **warp-ai/wuerstchen-prior** on the **Aff4n20/ancient-coin-dataset** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: 'inscription, IMP AVG DIVI F; bare head of Augustus left; in front, palm; behind, winged caduceus':

![val_imgs_grid](./val_imgs_grid.png)

## Pipeline usage

You can use the pipeline like so:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
)

# Load the LoRA weights into the prior pipeline
pipeline.prior_pipe.load_lora_weights("Aff4n20/wuerstchen-ancient-coins")

prompt = "inscription, IMP AVG DIVI F; bare head of Augustus left; in front, palm; behind, winged caduceus"
image = pipeline(prompt=prompt).images[0]
image.save("my_image.png")
```

## Training info

These are the key hyperparameters used during training:

* LoRA rank: 4
* Epochs: 19
* Learning rate: 0.0001
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 512
* Mixed precision: fp16

More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/aff4n20/text2image-fine-tune/runs/5ewvbkug).
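For reference, a training run with the hyperparameters above can be sketched as a `diffusers` example-script invocation. This is an assumption-laden sketch, not the exact command used: the script path and flag names are taken from the standard Würstchen LoRA prior training example in the `diffusers` repository and may differ between versions.

```shell
# Hypothetical sketch of the training command; verify script path and
# flag names against your installed diffusers examples before running.
accelerate launch train_text_to_image_lora_prior.py \
  --pretrained_prior_model_name_or_path="warp-ai/wuerstchen-prior" \
  --dataset_name="Aff4n20/ancient-coin-dataset" \
  --rank=4 \
  --num_train_epochs=19 \
  --learning_rate=1e-4 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --resolution=512 \
  --mixed_precision="fp16" \
  --report_to="wandb"
```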
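As a quick sanity check on the hyperparameters above: with a per-step batch size of 1 and 4 gradient accumulation steps, the effective batch size seen by the optimizer works out as follows.

```python
# Effective batch size = per-step batch size * gradient accumulation steps
batch_size = 1
gradient_accumulation_steps = 4

effective_batch_size = batch_size * gradient_accumulation_steps
print(effective_batch_size)  # -> 4
```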