Use the model without crediting the creator
Sell images they generate
Run on services that generate images for money
Run on Civitai
Share merges using this model
Sell this model or merges using this model
Have different permissions when sharing merges
For sampling methods, use Euler a (best), DDIM (second best), or DPM++ 2M Karras.
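If you generate with the diffusers library instead of the A1111 WebUI, the equivalent schedulers can be picked roughly like this. This is a minimal sketch, not part of the original instructions: the model file name is hypothetical, and from_single_file needs a reasonably recent diffusers version.

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,   # "Euler a"
    DDIMScheduler,                     # "DDIM"
    DPMSolverMultistepScheduler,       # "DPM++ 2M" (Karras sigmas optional)
)

# Hypothetical file name -- point this at the downloaded SAFETENSORS file.
pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
).to("cuda")

# Euler a (recommended)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# DDIM (second best)
# pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# DPM++ 2M Karras
# pipe.scheduler = DPMSolverMultistepScheduler.from_config(
#     pipe.scheduler.config, use_karras_sigmas=True
# )
```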
Step 1: Download the SAFETENSORS and VAE files.
Step 2: Put the SAFETENSORS file under "stable-diffusion-webui\models\Stable-diffusion".
Step 3: Put the VAE file under "stable-diffusion-webui\models\VAE".
Step 4: Done! Enjoy the model.
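The same file placement can also be scripted. A small sketch, assuming the default A1111 folder layout on Windows; the install path and file names are hypothetical and should be adjusted to your own downloads.

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your WebUI install and downloaded file names.
webui = Path(r"C:\stable-diffusion-webui")
shutil.copy("model.safetensors", webui / "models" / "Stable-diffusion")
shutil.copy("vae.safetensors", webui / "models" / "VAE")
```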
- Use a minimal negative prompt for best results.
- Use Euler a and 20-25 steps for best results (these settings are shown in the sketch after the example prompt below).
- Use Danbooru tags.
- I used a clip skip of 2 (optional).
- I also used the Latent (nearest-exact) upscaler with 20 hires steps and a denoise of 0.5 to improve image quality and detail (optional).
DO NOT USE A DENOISING STRENGTH BELOW 0.45 FOR BEST RESULTS.
EXAMPLE PROMPT
((Masterpiece)), (best quality), (1girl), red hair, beautiful red eyes, medium breasts, classroom, black glasses, school uniform
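For reference, here is how the example prompt and the recommended settings (Euler a, 25 steps, clip skip 2, a minimal negative prompt) might be reproduced with the diffusers library rather than the WebUI. This is a hedged sketch, not the author's workflow: the file names and negative prompt are hypothetical, the (( )) emphasis syntax is WebUI-specific and is dropped, clip_skip needs a recent diffusers version, and the optional second pass only approximates the WebUI's Latent (nearest-exact) highres fix.

```python
import torch
from PIL import Image
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    EulerAncestralDiscreteScheduler,
)

# Hypothetical file name -- point this at the downloaded SAFETENSORS file.
pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler a

# WebUI-style (( )) emphasis is not parsed by diffusers, so it is left out here.
prompt = (
    "masterpiece, best quality, 1girl, red hair, beautiful red eyes, "
    "medium breasts, classroom, black glasses, school uniform"
)
negative_prompt = "lowres, bad anatomy, bad hands"  # keep the negative prompt minimal

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=25,   # 20-25 steps recommended
    clip_skip=2,              # optional; needs a recent diffusers version
    width=512,
    height=512,
).images[0]

# Optional second pass, a rough stand-in for the WebUI highres fix:
# upscale the decoded image, then run img2img at denoising strength 0.5.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
upscaled = image.resize((1024, 1024), resample=Image.NEAREST)
image = img2img(
    prompt,
    negative_prompt=negative_prompt,
    image=upscaled,
    strength=0.5,             # do not go below ~0.45
    num_inference_steps=20,   # hires steps
).images[0]
image.save("example.png")
```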
The VAE is not required; you can use any VAE you like. I have found that this VAE makes images more vibrant and crisp, but that is just from my own testing.
For the VAE, I used the same one as grapefruit.
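If you use diffusers, an external VAE file can be loaded and swapped in like this (a sketch only; the file names are hypothetical, and AutoencoderKL.from_single_file needs a reasonably recent diffusers version).

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Hypothetical file names for the model and the external VAE.
vae = AutoencoderKL.from_single_file("vae.safetensors", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
)
pipe.vae = vae   # replace the baked-in VAE with the external one
pipe.to("cuda")
```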
I will try to improve and update this model by adding other images, but since I'm not too familiar with SD and training models, I will most likely stick to merging models.
_______
- LoRAs have not been tested yet, but they should most likely work (see the sketch after this list).
- Use the Latent (nearest-exact) upscaler for best results.
- I will try to update the model at least once per week.
- Image generation in the A1111 WebUI normally took around 10 seconds on an RTX 3080 Ti with 12 GB of VRAM.
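For anyone generating with diffusers rather than the WebUI, a LoRA could be attached roughly like this. It is untested with this model, matching the note above, and the file names and prompt are hypothetical.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical file names.
pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some_lora.safetensors")  # attach a LoRA checkpoint

image = pipe("1girl, red hair, school uniform", num_inference_steps=25).images[0]
image.save("lora_test.png")
```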