
This is a public LoRA style training (4 separate trainings, each on 4x A6000 GPUs).

I am experimenting with captions vs. no captions, so we will see which yields the best results for style training on FLUX.

I generated the captions with my multi-GPU batch Joycaption app (I used 8x A6000 GPUs for ultra-fast captioning).

I am showing 5 examples of what Joycaption generates on FLUX dev. The left images are the original style images from the dataset.

https://www.patreon.com/posts/110613301

Joycaption examples

I used my Gradio batch caption editor to edit some words and to add the activation token ohwx 3d render.

https://www.patreon.com/posts/108992085

Gradio batch caption editor

The no-caption dataset uses only ohwx 3d render as its caption.
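The actual editing was done in the Gradio app linked above; the core batch operation can be sketched as prepending the activation token to every caption file. This is a minimal sketch under my own assumptions (one .txt caption file per image, token prepended with a comma — neither is confirmed by the post):

```python
from pathlib import Path

# Activation token used in this training (from the post)
ACTIVATION_TOKEN = "ohwx 3d render"


def prepend_activation_token(caption_dir: str, token: str = ACTIVATION_TOKEN) -> int:
    """Prepend the activation token to every .txt caption that doesn't already start with it."""
    edited = 0
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(token):
            txt.write_text(f"{token}, {caption}", encoding="utf-8")
            edited += 1
    return edited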

I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json config on 4x A6000 GPUs, training for 500 epochs on 114 images.

https://www.patreon.com/posts/110879657

Training configuration

Inconsistent Dataset Training

This is the first training I did with the dataset below.

Inconsistent-Training-Dataset-Images-Grid.jpg

If you look at the grid image shared above, you will see that the dataset is not consistent.

It has 114 images in total.

The total step count for this training was 500 * 114 / 4 (4x GPU, batch size 1) = 14,250.

It took about 37 hours on 4x RTX A6000 GPUs with the slow config; the faster config would take roughly half that.
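The step arithmetic above can be checked quickly (the derived seconds-per-step figure is my own rough estimate from the stated 37 hours, not from the post):

```python
# Per-GPU step count for this run, using the numbers from the post
epochs = 500
images = 114
num_gpus = 4  # batch size 1 on each GPU

total_steps = epochs * images // num_gpus
print(total_steps)  # 14250

# Rough wall-clock cost per step at ~37 hours total
seconds_per_step = 37 * 3600 / total_steps
print(round(seconds_per_step, 1))  # ~9.3
```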

Two trainings were made with this dataset. The epoch-500 checkpoints are named as follows:

SECourses_Style_Inconsistent_DATASET_NO_Captions.safetensors
SECourses_Style_Inconsistent_DATASET_With_Captions.safetensors

Their checkpoints are saved in the folders below:

Training-Checkpoints-NO-Captions
Training-Checkpoints-With-Captions

The grid results are shared below:

https://huggingface.co./MonsterMMORPG/3D-Cartoon-Style-FLUX/resolve/main/Inconsistent-Training-Dataset-Results-Grid-26100x23700px.jpg

If you look at the image above, you will see that the results are inconsistent.

My FLUX tutorial videos:

1 : https://youtu.be/bupRePUOA18

FLUX: The First Ever Open Source txt2img Model Truly Beats Midjourney & Others - FLUX is Awaited SD3


2 : https://youtu.be/nySGu12Y05k

FLUX LoRA Training Simplified: From Zero to Hero with Kohya SS GUI (8GB GPU, Windows) Tutorial Guide


3 : https://youtu.be/-uhL2nW7Ddw

Blazing Fast & Ultra Cheap FLUX LoRA Training on Massed Compute & RunPod Tutorial - No GPU Required!


Hopefully I will share the trained LoRA on Hugging Face and CivitAI, along with the full dataset including captions.

I got permission to share the dataset, but it can't be used commercially.

I will also hopefully share the full workflow on the CivitAI and Hugging Face LoRA pages.

So far, 450 epochs have completed.

Training progress