---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
widget:
- text: monet,  a landscape of a snowy mountain region big clouds
  output:
    url: images/example_ul824c994.png
- text: >-
    monet, majestic cliffs overlooking a serene ocean, with dramatic rock
    formations bathed in soft light. The cliffs are painted in shades of green,
    ochre, and brown, contrasting with the smooth, flowing waves below,
    capturing the raw, natural beauty of the landscape
  output:
    url: images/example_kuktgiadt.png
- text: >-
    monet,  mountains far back, street lamps shines with warm yellow colors,
    black night
  output:
    url: images/example_k3nlagsga.png
- text: Monet garden scene with colorful flowers and reflections on water
  output:
    url: images/example_r2xalzxv0.png
- text: Monet snowy lakeside at sunset
  output:
    url: images/example_eztpwthdj.png
datasets:
- Aedancodes/monet_dataset

---



# LoRA text2image fine-tuning

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on the Aedancodes/monet_dataset dataset.

## Trigger words

> [!WARNING]
> **Trigger words:** Include `Monet` in your prompt to trigger the fine-tuned style.
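
## Usage

A minimal inference sketch with the diffusers library. The repository id passed to `load_lora_weights` below is a placeholder; replace it with this repo's id or a local path to the LoRA weights.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA adapter (placeholder id; point this at the repo or a local copy).
pipe.load_lora_weights("<this-lora-repo-id>")

# Include the trigger word in the prompt.
prompt = "Monet, a garden scene with colorful flowers and reflections on water"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("monet_garden.png")
```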


## Training details

```python
resolution       = 1024          # images trained at 1024 x 1024
train_batch_size = 1
max_train_steps  = 1000
learning_rate    = 5e-5
lr_scheduler     = "constant"
mixed_precision  = "fp16"
use_8bit_adam    = True
```
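
These hyperparameters correspond to flags of the official diffusers LoRA SDXL example script (`train_text_to_image_lora_sdxl.py`). Assuming that script (or an equivalent) and an `accelerate` environment were used, a launch could look like the following sketch; the output directory is a placeholder:

```python
import subprocess

# Launch the diffusers example script via accelerate (sketch; paths are placeholders).
subprocess.run([
    "accelerate", "launch", "train_text_to_image_lora_sdxl.py",
    "--pretrained_model_name_or_path", "stabilityai/stable-diffusion-xl-base-1.0",
    "--dataset_name", "Aedancodes/monet_dataset",
    "--resolution", "1024",
    "--train_batch_size", "1",
    "--max_train_steps", "1000",
    "--learning_rate", "5e-5",
    "--lr_scheduler", "constant",
    "--mixed_precision", "fp16",
    "--use_8bit_adam",
    "--output_dir", "monet-sdxl-lora",  # placeholder output path
], check=True)
```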