AlekseyCalvin
committed
Commit c283f2d
Parent(s): 9e7b6dd
Update README.md
README.md CHANGED
@@ -61,12 +61,12 @@ Trained on Replicate using:

https://replicate.com/ostris/flux-dev-lora-trainer/train

Via Ostris' [ai-toolkit](https://replicate.com/ostris/flux-dev-lora-trainer/train) on 50 high-resolution scans of 1910s/1920s posters & artworks by the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
+ Prior to this training experiment, we first spent many days rigorously translating the textual elements (slogans, captions, titles, inset poems, speech fragments, etc.), keeping form, signification, and rhymes intact, throughout every image subsequently used for training. <br>
These translated textographic elements were, furthermore, re-placed by us into their original visual contexts, using fonts matched to the sources. <br>
+ For this first version of the training, unlike Version 2 (linked below), we used auto-captions and did not train the text encoder. <br>
This first not-very-successful version of the resulting LoRA (check out V.2 [here](https://huggingface.co/AlekseyCalvin/Mayakovsky_Posters_2_5kSt)) was trained on regular old FLUX.1-Dev. <br>
+ On this run, training went for a mere 1500 steps at a DiT learning rate of 0.0004, with batch size 3 and the AdamW8bit optimizer! <br>
+ No synthetic data was used. <br>

This is a **rank-32/alpha-32 Constructivist Art & Soviet Satirical Cartoon LoRA for Flux** (whether of a [Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), a [Schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell), or a [Soon®](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) sort...) <br>
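
For readers who want to try a comparable run, here is a minimal, hypothetical sketch of launching a training with the settings described above (1500 steps, learning rate 0.0004, batch size 3, rank 32, AdamW8bit, auto-captions) via the official `replicate` Python client. The trainer version hash, destination model, and images URL are placeholders, and the input field names are assumptions drawn from the trainer's typical schema, not the exact configuration used for this LoRA; check https://replicate.com/ostris/flux-dev-lora-trainer/train for the authoritative inputs.

```python
# Hypothetical sketch only: placeholders and input field names below are
# assumptions. Verify them against the trainer's input schema on Replicate.
import replicate

training = replicate.trainings.create(
    # Replace <version-id> with the current version hash from the trainer page.
    version="ostris/flux-dev-lora-trainer:<version-id>",
    input={
        "input_images": "https://example.com/mayakovsky_scans.zip",  # placeholder: zip of the 50 scans
        "steps": 1500,              # total training steps for this run
        "learning_rate": 0.0004,    # DiT learning rate
        "batch_size": 3,            # batch size
        "lora_rank": 32,            # rank-32 / alpha-32 LoRA
        "optimizer": "adamw8bit",   # 8-bit AdamW optimizer
        "autocaption": True,        # auto-captions, as in this first version
    },
    # Placeholder destination model on Replicate that receives the trained LoRA.
    destination="your-username/mayakovsky-posters-lora",
)
print(training.id, training.status)
```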
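For inference, a minimal sketch with Hugging Face `diffusers`, assuming a recent version with Flux support and a CUDA GPU; the LoRA repository shown is the V.2 weights linked above (swap in this repo's id for the V.1 weights), and the prompt and sampler settings are purely illustrative.

```python
# Minimal inference sketch: load FLUX.1-Dev, attach the Mayakovsky-poster LoRA,
# and generate one sample image. Prompt and settings are illustrative only.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("AlekseyCalvin/Mayakovsky_Posters_2_5kSt")  # V.2 repo linked above
pipe.to("cuda")

image = pipe(
    "Constructivist agitprop poster with bold red and black lettering, "
    "satirical cartoon figures in the style of 1920s Soviet poster art",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("mayakovsky_lora_sample.png")
```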