---
license: creativeml-openrail-m
language:
  - en
tags:
  - stable-diffusion
  - SDXL
  - art
  - stable-diffusion-XL
  - fantasy
  - anime
  - aiart
  - ketengan
  - AnySomniumXL
pipeline_tag: text-to-image
library_name: diffusers
---

# AnySomniumXL v3.5 Model Showcase

Ketengan-Diffusion/AnySomniumXL v3.5 is an SDXL model fine-tuned from stabilityai/stable-diffusion-xl-base-1.0.

This is an enhanced version of AnySomniumXL v3.
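A minimal sketch of loading the model with the diffusers library. The repo id `Ketengan-Diffusion/AnySomniumXL-v3.5` and the sampling settings below are assumptions, not official usage; check the model page for the exact id.

```python
# Minimal loading sketch; repo id and settings are assumptions, not official usage.
MODEL_ID = "Ketengan-Diffusion/AnySomniumXL-v3.5"  # assumed repo id


def load_pipeline(device: str = "cuda"):
    """Load the fine-tuned SDXL pipeline in half precision (needs a CUDA GPU)."""
    import torch
    from diffusers import StableDiffusionXLPipeline  # pip install diffusers transformers accelerate

    pipe = StableDiffusionXLPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    return pipe.to(device)


# Usage on a CUDA machine:
#   pipe = load_pipeline()
#   image = pipe("1girl, fantasy forest, detailed background",
#                width=1280, height=1280).images[0]  # native training resolution
#   image.save("sample.png")
```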

## Changelog over AnySomniumXL v3

- Better captioning process
- Better model generalization
- Increased concept and character accuracy
- Better stylization of untrained tokens

## Our Dataset Curation Process

Image source: Source1 Source2 Source3

Our dataset is scored with the pretrained CLIP+MLP aesthetic scoring model from https://github.com/christophschuhmann/improved-aesthetic-predictor, and we adjusted our script to detect text or watermarks with OCR via pytesseract.

This scoring method uses a scale from -1 to 100. We take a minimum threshold of around 17-20 and a maximum of 65-75 to retain the 2D style of the dataset; any image containing text receives a score of -1. Any image scoring below 17 or above 65 is deleted.
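The filtering rule above can be sketched as a small helper. The thresholds 17 and 65 are the ones named in the text; the function names are illustrative, not the actual curation script.

```python
# Illustrative sketch of the curation filter described above.
SCORE_MIN = 17  # lower bound; the text mentions 17-20
SCORE_MAX = 65  # upper bound; the text mentions 65-75


def final_score(aesthetic_score: float, has_text: bool) -> float:
    """OCR-detected text or watermarks force the score to -1; otherwise keep the aesthetic score."""
    return -1.0 if has_text else aesthetic_score


def keep_image(aesthetic_score: float, has_text: bool) -> bool:
    """An image survives curation only if its final score falls inside [SCORE_MIN, SCORE_MAX]."""
    return SCORE_MIN <= final_score(aesthetic_score, has_text) <= SCORE_MAX
```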

The dataset curation process runs on an NVIDIA T4 16GB machine and takes about 7 days to curate 1,000,000 images.

## Captioning Process

We use a combination of a proprietary multimodal LLM and open-source multimodal LLMs such as LLaVA 1.5 for captioning, which produces richer results than plain BLIP-2. Details such as clothing, atmosphere, situation, scene, place, gender, skin, and more are described by the LLM.

Captioning 133k images took about 6 days on an NVIDIA Tesla A100 80GB PCIe; we are still improving our script to generate captions faster. The process requires a minimum of 24GB of VRAM, so an NVIDIA Tesla T4 16GB is not sufficient.
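For a rough sense of throughput, the quoted numbers work out to roughly four seconds per caption. This is a back-of-the-envelope calculation, not an official benchmark:

```python
def seconds_per_image(n_images: int, days: float) -> float:
    """Average wall-clock seconds spent per image over the whole run."""
    return days * 24 * 3600 / n_images


# 133,000 captions in ~6 days on one A100 80GB -> roughly 3.9 s per caption
```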

## Tagging Process

We simply use booru tags retrieved from booru boards. Since these tags are largely assigned manually by humans, they are more accurate.

## Official Demo

You can try our AnySomniumXL v3 for free at demo.ketengan.com.

## Training Process

AnySomniumXL v3.5 technical specifications:

- Batch size: 25
- Learning rate: 2e-6
- Bucket size: 1280x1280
- Shuffle caption: yes
- Clip skip: 2
- Hardware: 2x NVIDIA A100 80GB
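"Shuffle caption" in the specs above means randomizing the order of comma-separated tags each time a caption is seen, so the model does not tie concepts to tag position. A minimal sketch, assuming a plain comma-separated caption format; this is illustrative, not the actual training code:

```python
import random


def shuffle_caption(caption: str, seed=None) -> str:
    """Randomly reorder comma-separated tags; content is preserved, order is not."""
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    random.Random(seed).shuffle(tags)
    return ", ".join(tags)
```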

## Recommended Resolution

Because the model is trained at 1280x1280, the following resolutions draw out the full power of AnySomniumXL v3.5:

- 1280x1280
- 1472x1088
- 1152x1408
- 1536x1024
- 1856x832
- 1024x1600
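Each recommended resolution stays at or below the native 1280x1280 pixel budget, with both sides divisible by 64 (a common constraint in SDXL aspect-ratio bucketing). That can be checked mechanically with an illustrative helper:

```python
# Sanity check for the recommended resolutions above (illustrative helper).
PIXEL_BUDGET = 1280 * 1280
RECOMMENDED = [(1280, 1280), (1472, 1088), (1152, 1408),
               (1536, 1024), (1856, 832), (1024, 1600)]


def valid_bucket(width: int, height: int, budget: int = PIXEL_BUDGET, step: int = 64) -> bool:
    """A bucket is valid if both sides are multiples of `step` and the area fits the budget."""
    return width % step == 0 and height % step == 0 and width * height <= budget
```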

You can support me: