MonsterMMORPG committed on
Commit 447b69e
1 Parent(s): c6038ae

Update README.md

Files changed (1): README.md +6 -5
README.md CHANGED
@@ -6,17 +6,17 @@ Generated captions with multi-GPU batch Joycaption app.
 
 I am showing 5 examples of what Joycaption generates on FLUX dev. Left images are the original style images from the dataset.
 
-I used my multi-GPU Joycaption APP (used 8x A6000 for ultra fast captioning) : https://www.patreon.com/posts/110613301
+# I used my multi-GPU Joycaption APP (used 8x A6000 for ultra fast captioning) : https://www.patreon.com/posts/110613301
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/LTfUYHXCpcwzt3_us0R26.png)
 
-I used my Gradio batch caption editor to edit some words and add activation token as ohwx 3d render : https://www.patreon.com/posts/108992085
+# I used my Gradio batch caption editor to edit some words and add activation token as ohwx 3d render : https://www.patreon.com/posts/108992085
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/BleDJpEMrCMXXRTCPKJqb.png)
 
 The no caption dataset uses only ohwx 3d render as caption
 
-I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4X A6000 GPU and train 500 epochs - 114 images : https://www.patreon.com/posts/110879657
+# I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4X A6000 GPU and train 500 epochs - 114 images : https://www.patreon.com/posts/110879657
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/jK75d8i1x5hAHSYSsJNBd.png)
 
@@ -26,9 +26,9 @@ Taking 37 hours currently if I don't terminate early
 
 Will save a checkpoint once every 25 epochs
 
-Full Windows Kohya LoRA training tutorial : https://youtu.be/nySGu12Y05k
+# Full Windows Kohya LoRA training tutorial : https://youtu.be/nySGu12Y05k
 
-Full Cloud Kohya LoRA training tutorial (Massed Compute + RunPod) : https://youtu.be/-uhL2nW7Ddw
+# Full Cloud Kohya LoRA training tutorial (Massed Compute + RunPod) : https://youtu.be/-uhL2nW7Ddw
 
 Hopefully will share trained LoRA on Hugging Face and CivitAI along with full dataset including captions.
 
@@ -38,4 +38,5 @@ Also I will hopefully share full workflow in the CivitAI and Hugging Face LoRA p
 
 So far 450 epochs completed
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/7ZFz_ZW53ipp8LHYuPPSg.png)
 
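The caption-editing step described in the diff boils down to prepending a fixed activation token to every caption file in the dataset. The author's Gradio batch editor is behind the Patreon link above; as a minimal stand-in sketch (the `prepend_token` helper below is illustrative, not the author's tool — only the `ohwx 3d render` token comes from the README), the batch edit could look like:

```python
from pathlib import Path

ACTIVATION_TOKEN = "ohwx 3d render"  # token used in the dataset above

def prepend_token(caption_dir: str, token: str = ACTIVATION_TOKEN) -> int:
    """Prepend the activation token to every .txt caption file.

    Skips files that already start with the token, so the script is
    safe to re-run. Returns the number of files updated.
    """
    updated = 0
    for path in sorted(Path(caption_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8").strip()
        if not text.startswith(token):
            path.write_text(f"{token}, {text}", encoding="utf-8")
            updated += 1
    return updated
```

The "no caption" dataset variant mentioned in the README is the degenerate case of this: every .txt file contains only `ohwx 3d render` and nothing else.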
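The schedule figures quoted above (500 epochs, a checkpoint every 25 epochs, roughly 37 hours for a full run, 450 epochs done so far) imply some quick back-of-envelope arithmetic; everything below is derived only from those stated numbers:

```python
# Figures stated in the README
epochs = 500
save_every = 25    # checkpoint interval, in epochs
total_hours = 37   # quoted wall time for the full run
epochs_done = 450  # reported progress

checkpoints = epochs // save_every              # checkpoints written over the run
minutes_per_epoch = total_hours * 60 / epochs   # average pace implied by the quote
hours_elapsed = epochs_done * minutes_per_epoch / 60

print(f"{checkpoints} checkpoints, ~{minutes_per_epoch:.1f} min/epoch, "
      f"~{hours_elapsed:.1f} h elapsed at epoch {epochs_done}")
```

So a full run yields 20 saved checkpoints, at an average of about 4.4 minutes per epoch.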