End of training

- .gitattributes +4 -0
- README.md +96 -0
- config.yaml +34 -0
- image_0.png +3 -0
- image_1.png +3 -0
- image_2.png +3 -0
- image_3.png +3 -0
- pruebinha_adv.safetensors +3 -0
- pruebinha_adv_emb.safetensors +3 -0
- pytorch_lora_weights.safetensors +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+image_0.png filter=lfs diff=lfs merge=lfs -text
+image_1.png filter=lfs diff=lfs merge=lfs -text
+image_2.png filter=lfs diff=lfs merge=lfs -text
+image_3.png filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,96 @@
+---
+tags:
+- stable-diffusion-xl
+- stable-diffusion-xl-diffusers
+- diffusers-training
+- text-to-image
+- diffusers
+- lora
+- template:sd-lora
+widget:
+- text: 'selfie photo of <s0><s1> woman wearing nice clothes'
+  output:
+    url: "image_0.png"
+- text: 'selfie photo of <s0><s1> woman wearing nice clothes'
+  output:
+    url: "image_1.png"
+- text: 'selfie photo of <s0><s1> woman wearing nice clothes'
+  output:
+    url: "image_2.png"
+- text: 'selfie photo of <s0><s1> woman wearing nice clothes'
+  output:
+    url: "image_3.png"
+base_model: stabilityai/stable-diffusion-xl-base-1.0
+instance_prompt: <s0><s1> woman
+license: openrail++
+---
+
+# SDXL LoRA DreamBooth - jrochafe/pruebinha_adv
+
+<Gallery />
+
+## Model description
+
+### These are jrochafe/pruebinha_adv LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
+
+## Download model
+
+### Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke
+
+- **LoRA**: download **[`pruebinha_adv.safetensors` here 💾](/jrochafe/pruebinha_adv/blob/main/pruebinha_adv.safetensors)**.
+  - Place it in your `models/Lora` folder.
+  - On AUTOMATIC1111, load the LoRA by adding `<lora:pruebinha_adv:1>` to your prompt. On ComfyUI, just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
+- *Embeddings*: download **[`pruebinha_adv_emb.safetensors` here 💾](/jrochafe/pruebinha_adv/blob/main/pruebinha_adv_emb.safetensors)**.
+  - Place it in your `embeddings` folder.
+  - Use it by adding `pruebinha_adv_emb` to your prompt. For example, `pruebinha_adv_emb woman`.
+
+You need both the LoRA and the embeddings, as they were trained together.
+
+## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+```py
+from diffusers import AutoPipelineForText2Image
+import torch
+from huggingface_hub import hf_hub_download
+from safetensors.torch import load_file
+
+pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
+pipeline.load_lora_weights('jrochafe/pruebinha_adv', weight_name='pytorch_lora_weights.safetensors')
+embedding_path = hf_hub_download(repo_id='jrochafe/pruebinha_adv', filename='pruebinha_adv_emb.safetensors', repo_type="model")
+state_dict = load_file(embedding_path)
+# load the <s0><s1> embeddings into both SDXL text encoders
+pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
+pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
+
+image = pipeline('selfie photo of <s0><s1> woman wearing nice clothes').images[0]
+```
+
+For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
+
+## Trigger words
+
+To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
+
+To trigger the concept `http` → use `<s0><s1>` in your prompt.
+
+## Details
+
+All [Files & versions](/jrochafe/pruebinha_adv/tree/main).
+
+The weights were trained using the [🧨 diffusers Advanced DreamBooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
+
+LoRA for the text encoder was enabled: False.
+
+Pivotal tuning was enabled: True.
+
+Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
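As a minimal sketch of the trigger-word substitution the README describes (the concept identifier `http` and the tokens `<s0><s1>` come from this repo's config; the helper function itself is hypothetical):

```python
# Build a prompt for this LoRA by swapping the concept identifier "http"
# (the token_abstraction from config.yaml) for the inserted tokens <s0><s1>.
def to_trigger_prompt(prompt: str, concept: str = "http", tokens: str = "<s0><s1>") -> str:
    return prompt.replace(concept, tokens)

print(to_trigger_prompt("selfie photo of http woman wearing nice clothes"))
# → selfie photo of <s0><s1> woman wearing nice clothes
```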
config.yaml
ADDED
@@ -0,0 +1,34 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
pretrained_vae_model_name_or_path: madebyollin/sdxl-vae-fp16-fix
|
2 |
+
pretrained_model_name_or_path: stabilityai/stable-diffusion-xl-base-1.0
|
3 |
+
dataset_name: /home/usr_9106259_ulta_com/illiad/data/identity/pola
|
4 |
+
instance_prompt: "http woman"
|
5 |
+
token_abstraction: http
|
6 |
+
validation_prompt: "selfie photo of http woman wearing nice clothes"
|
7 |
+
output_dir: pruebinha_adv
|
8 |
+
caption_column: prompt
|
9 |
+
mixed_precision: fp16
|
10 |
+
resolution: 1024
|
11 |
+
train_batch_size: 3
|
12 |
+
repeats: 1
|
13 |
+
rank: 32
|
14 |
+
report_to: tensorboard
|
15 |
+
gradient_accumulation_steps: 1
|
16 |
+
gradient_checkpointing: true
|
17 |
+
learning_rate: 1.0
|
18 |
+
text_encoder_lr: 1.0
|
19 |
+
adam_beta2: 0.99
|
20 |
+
optimizer: prodigy
|
21 |
+
train_text_encoder_ti: true
|
22 |
+
train_text_encoder_ti_frac: 0.5
|
23 |
+
snr_gamma: 5.0
|
24 |
+
lr_scheduler: constant
|
25 |
+
lr_warmup_steps: 0
|
26 |
+
max_train_steps: 20
|
27 |
+
checkpointing_steps: 251
|
28 |
+
seed: 0
|
29 |
+
with_prior_preservation: true
|
30 |
+
class_data_dir: /home/usr_9106259_ulta_com/illiad/data/identity/class_images/a_selfie_of_a_woman
|
31 |
+
class_prompt: "a selfie photo of a woman"
|
32 |
+
num_class_images: 200
|
33 |
+
push_to_hub: true
|
34 |
+
hub_model_id: jrochafe/pruebinha_adv
|
image_0.png
ADDED
Git LFS Details

image_1.png
ADDED
Git LFS Details

image_2.png
ADDED
Git LFS Details

image_3.png
ADDED
Git LFS Details
pruebinha_adv.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26716cdc3bacef611be09ebdb105381fd96b1f404bceefe1a8059433db8bd0aa
+size 186046568
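Each `.safetensors` entry in this commit is a Git LFS pointer file rather than the weights themselves: three lines giving the spec version, the SHA-256 of the real blob, and its size in bytes. A small sketch of reading one (the parser function is illustrative; the pointer text is copied from the file above):

```python
# Parse a Git LFS pointer file into its key/value fields
# (each line is "key value", e.g. "size 186046568").
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:26716cdc3bacef611be09ebdb105381fd96b1f404bceefe1a8059433db8bd0aa
size 186046568
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```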
pruebinha_adv_emb.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1c28eaaf36d8c2609c17291e33a6f646cd7baa0f580eb350e2acd4338b59fbc
+size 16536
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0908177fb5f648e37d23d74feb8b82708d2cd7b1224aee27da46a4954aa70bac
+size 185963768