AnySomniumXL v2
Ketengan-Diffusion/AnySomniumXL v2 is an SDXL model fine-tuned from stabilityai/stable-diffusion-xl-base-1.0.
Our Dataset Curation Process
Our dataset is scored with the pretrained CLIP+MLP aesthetic scoring model from https://github.com/christophschuhmann/improved-aesthetic-predictor, and we adjusted our script to detect text or watermarks using OCR via pytesseract.
This scoring method uses a scale from -1 to 100. We take a score threshold of around 17-20 as the minimum and 65-75 as the maximum to retain the 2D style of the dataset; any image containing text returns a score of -1. Any image scoring below 17 or above 65 is deleted.
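A minimal sketch of this filtering step is shown below. The `score_image` helper is hypothetical and stands in for the CLIP+MLP aesthetic predictor from the repository above; pytesseract supplies the OCR check, and the thresholds mirror the ones we describe.

```python
# Sketch of the curation filter, assuming a hypothetical score_image()
# helper that wraps the CLIP+MLP aesthetic predictor from
# https://github.com/christophschuhmann/improved-aesthetic-predictor.
from pathlib import Path

import pytesseract
from PIL import Image

MIN_SCORE = 17  # lower threshold (we used 17-20)
MAX_SCORE = 65  # upper threshold (we used 65-75)

def score_image(image: Image.Image) -> float:
    """Hypothetical wrapper around the aesthetic predictor.
    Returns a score on the predictor's 0-100 scale."""
    raise NotImplementedError("plug the aesthetic predictor in here")

def keep(path: Path) -> bool:
    """Return True if the image passes curation, False if it should be deleted."""
    image = Image.open(path).convert("RGB")

    # OCR check: any detected text or watermark forces a -1 score.
    text = pytesseract.image_to_string(image).strip()
    score = -1.0 if text else score_image(image)

    # Keep only images inside the aesthetic window.
    return MIN_SCORE <= score <= MAX_SCORE

for path in Path("dataset").glob("**/*.png"):
    if not keep(path):
        path.unlink()  # delete images outside the threshold
```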
The dataset curation process ran on an NVIDIA T4 16GB machine and took about 2 days to curate 300,000 images.
Captioning Process
We use an open-source multimodal LLM for captioning, which produces more detailed results than plain BLIP2. Details such as clothing, atmosphere, situation, scene, place, gender, skin, and more are generated by the LLM.
Captioning 33k images took about 6 days on an NVIDIA Tesla A100 80GB PCIe. We are still improving our script to generate captions faster. This captioning process requires a minimum of 24GB of VRAM, so an NVIDIA Tesla T4 16GB is not sufficient.
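The captioning loop can be sketched roughly as follows. The card names LLaVA 1.5 but not the exact inference stack, so the llava-hf checkpoint on Hugging Face and the prompt wording are assumptions here.

```python
# Sketch of LLaVA 1.5 captioning via the transformers library. The
# checkpoint name and prompt are assumptions; the card only says an
# open-source multimodal LLM (LLaVA 1.5) was used.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def caption(image_path: str) -> str:
    """Generate a detailed natural-language caption for one image."""
    image = Image.open(image_path).convert("RGB")
    prompt = (
        "USER: <image>\nDescribe this image in detail, including clothing, "
        "atmosphere, scene, place, and the people in it. ASSISTANT:"
    )
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200)
    # Keep only the assistant's answer from the decoded sequence.
    text = processor.decode(output[0], skip_special_tokens=True)
    return text.split("ASSISTANT:")[-1].strip()

print(caption("sample.png"))
```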
Tagging Process
We simply use booru tags retrieved from booru boards; because these tags can be assigned manually by humans, they are more accurate.
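The card does not specify how tags are combined with the LLM captions; one plausible sketch, assuming comma-separated booru tags appended to the LLaVA caption, looks like this (in practice, caption shuffling is usually handled by the trainer, per the "Shuffle Caption" setting below):

```python
# Hypothetical helper for merging booru tags into a caption file entry.
# The merging scheme is an assumption, not documented in the card.
import random

def merge_caption(llava_caption: str, booru_tags: str) -> str:
    """Append shuffled booru tags to the natural-language caption."""
    tags = [t.strip() for t in booru_tags.split(",") if t.strip()]
    random.shuffle(tags)
    return llava_caption.rstrip(".") + ", " + ", ".join(tags)

print(merge_caption("A girl standing in the rain.", "1girl, umbrella, city, night"))
```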
Training Process
AnySomniumXL v2 Technical Specifications:
- Training: 20 epochs (these results come from the AnySomniumXL epoch-20 checkpoint), 1x batch size without gradient checkpointing
- Learning rate: 4e-7
- Text encoder trained with natural-language captions from LLaVA 1.5, which are more detailed than BLIP2 captions
- Trained with a bucket size of 1024x1024
- Optimizer: Adafactor (see the sketch after this list)
- LR scheduler: Constant with warmup
- Shuffle caption: Yes
- Clip skip: 2
- Trained on an NVIDIA A100 80GB for an estimated 126 training hours with a batch size of 2
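A minimal sketch of how the optimizer and scheduler settings above might be instantiated with the transformers library; the warmup length and the stand-in parameter list are assumptions, since the card only lists the hyperparameters.

```python
# Sketch of the Adafactor + constant-with-warmup setup listed above,
# using the implementations from the transformers library. The trainer
# wiring and warmup length are assumptions.
import torch
from transformers.optimization import Adafactor, get_constant_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(10))]  # stand-in for UNet/text-encoder params

optimizer = Adafactor(
    params,
    lr=4e-7,                # learning rate from the spec above
    scale_parameter=False,  # use the fixed LR instead of Adafactor's
    relative_step=False,    # relative step-size schedule
    warmup_init=False,
)

# Constant LR after a warmup phase; the warmup length is an assumption.
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=100)
```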
You can support me:
- on Ko-FI