AlekseyCalvin committed
Commit 95ffc42
1 parent: aae9243

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -26,7 +26,7 @@ pipeline_tag: text-to-image
 library_name: diffusers
 emoji: 🔜
 
-instance_prompt: vintage photograph of Anna AKHMATOVA, blemished skin texture with slight wrinkles
+instance_prompt: MAYAK style Constructivist Poster
 
 widget:
 - text: MAYAK style drawing of Osip Mandelshtam reciting /OH, BUT PETERSBURG! NO! IM NOT READY TO DIE! YOU STILL HOLD ALL THE TELEPHONE NUMBERS OF MINE!/
@@ -48,13 +48,13 @@ widget:
 ---
 <Gallery />
 
-# Mayakovsky Style Soviet Constructivist Posters & Cartoons Flux LoRA (v.1) by SOON®
+# Mayakovsky Style Soviet Constructivist Posters & Cartoons Flux LoRA – Version 2 – by SOON®
 Trained via Ostris' [ai-toolkit](https://replicate.com/ostris/flux-dev-lora-trainer/train) on 50 high-resolution scans of 1910s/1920s posters & artworks by the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
 For this training experiment, we first spent many days rigorously translating the textual elements (slogans, captions, titles, inset poems, speech fragments, etc.), with form/signification/rhymes intact, throughout every image subsequently used for training. <br>
 These translated textographic elements were then placed back into their original visual contexts, using fonts matched to the sources. <br>
 We then manually composed highly detailed, paragraph-long captions covering both the graphic and the textual content of each piece, its layout, and the most intuitive/intended apprehension of each composition. <br>
 This version of the resultant LoRA was trained on our custom Schnell-based checkpoint (Historic Color 2), available [here in fp8 Safetensors](https://huggingface.co/AlekseyCalvin/HistoricColorSoonrFluxV2/tree/main) and [here in Diffusers format](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers). <br>
-The training ran for 5000 steps at a DiT learning rate of 0.00002, batch size 1, with the ademamix8bit optimizer!<br>
+The training ran for 5000 steps at a DiT learning rate of 0.00002, batch size 1, with the ademamix8bit optimizer, and with both text encoders trained alongside the DiT!<br>
 No synthetic data was used for the training, nor any auto-generated captions! Everything was manually and attentively pre-curated with a deep respect for the sources used. <br>
 
 This is a **rank-32/alpha-64 Constructivist Art & Soviet Satirical Cartoon LoRA for Flux** (whether of a [Dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), a [Schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell), or a [Soon®](https://huggingface.co/AlekseyCalvin/HistoricColorSoonr_Schnell) sort...) <br>
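Since the card declares `library_name: diffusers` and the new `instance_prompt` establishes the MAYAK trigger, a minimal inference sketch might look like the following. It assumes the stock FLUX.1-schnell base (the Historic Color 2 Diffusers checkpoint linked in the card could be substituted); the LoRA repository id and the extra prompt wording are placeholders of mine, not names taken from this commit.

```python
# Minimal sketch: load a Flux Schnell base and apply a MAYAK-style LoRA via diffusers.
# "your-user/mayak-style-flux-lora" is a placeholder repo id, not the actual repository.
import torch
from diffusers import FluxPipeline

base = "black-forest-labs/FLUX.1-schnell"  # or AlekseyCalvin/HistoricColorSoonr_v2_FluxSchnell_Diffusers
pipe = FluxPipeline.from_pretrained(base, torch_dtype=torch.bfloat16).to("cuda")

# Load the rank-32/alpha-64 LoRA described in the card (placeholder repo id).
pipe.load_lora_weights("your-user/mayak-style-flux-lora")

prompt = (
    "MAYAK style Constructivist Poster of a rocket launch, "
    "bold red and black typography, agitprop layout"
)
image = pipe(
    prompt,
    num_inference_steps=4,  # Schnell-family checkpoints are tuned for very few steps
    guidance_scale=0.0,     # Schnell is typically run without classifier-free guidance
    height=1024,
    width=1024,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("mayak_poster.png")
```

With a Dev-family base, one would typically raise `num_inference_steps` and use a nonzero `guidance_scale` instead.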
 
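For reference, the stated training setup (5000 steps, DiT learning rate 0.00002, batch size 1, ademamix8bit optimizer, rank 32 / alpha 64, manual captions only) maps roughly onto the knobs exposed by Ostris' linked Replicate trainer. The sketch below is only an illustration under assumptions: the input field names should be verified against the trainer's current schema, the version hash and destination repo are placeholders, and the hosted trainer fine-tunes FLUX.1-dev rather than the custom Historic Color 2 checkpoint actually used for this version.

```python
# Hypothetical sketch: verify field names and the version hash against the
# ostris/flux-dev-lora-trainer page on Replicate before running.
import replicate

training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:<version-hash>",  # placeholder hash
    destination="your-user/mayak-style-flux-lora",          # placeholder output repo
    input={
        "input_images": "https://example.com/mayak_dataset.zip",  # zip of images plus matching .txt captions
        "trigger_word": "MAYAK",
        "steps": 5000,                # 5000 steps, as stated in the card
        "learning_rate": 0.00002,     # DiT learning rate from the card
        "batch_size": 1,
        "lora_rank": 32,              # rank 32 (the card's alpha 64 may not be separately exposed)
        "optimizer": "ademamix8bit",  # assumes the trainer accepts this optimizer name
        "autocaption": False,         # the card stresses fully manual captions
    },
)
print(training.status)
```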