Unable to reproduce the results

#1
by Kenkai - opened

Hey there, thanks for sharing this embedding. I have used it a bit and results are overall great.

Just wanted to know how you managed to create the images shown in the model card. I tried using this embedding with AnythingV3 and the exact same prompt ('solo') and other params, including the seed mentioned there, but the results were not even close to as good as the ones shown.

TIA

For pure Anything, yes, just 'solo' won't work. With Anything, no matter what negative prompt you use, you will need a stronger positive prompt (but not by much). For example, adding 'masterpiece, best quality' and '1girl' gives me much better results.

masterpiece, best quality, solo 1girl
Negative prompt: sketch by bad-artist
Steps: 10, Sampler: DPM++ 2M Karras, CFG scale: 4, Seed: 3638101479, Size: 448x576, Model hash: 7ab762a7, Model: anything, Batch size: 9, Batch pos: 0, Clip skip: 2

TdSzvoG0IX.png

If you use the model that I trained on, you can go with just 'solo'. The model in question is 'blossom-extract'; you can make it by following this 1-step recipe:
[Add Difference, A=AnythingV3, B=F222, C=SD1.4, M=1.0].
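In case it helps, the Add Difference merge in that recipe computes, per weight tensor, `merged = A + M * (B - C)`. A minimal sketch of that arithmetic, using toy per-key floats rather than real checkpoints (actual merging operates on torch state dicts loaded from the .ckpt/.safetensors files, e.g. via the webui's Checkpoint Merger tab):

```python
# Add Difference merge: merged = A + M * (B - C)
# Toy illustration with plain floats; real checkpoints are dicts of
# torch tensors, but the per-parameter arithmetic is the same.

def add_difference(a, b, c, m=1.0):
    """Merge model weights keyed by parameter name: a + m * (b - c)."""
    return {k: a[k] + m * (b[k] - c[k]) for k in a}

# Hypothetical stand-ins for AnythingV3 (A), F222 (B), and SD1.4 (C)
A = {"w": 0.5}
B = {"w": 0.75}
C = {"w": 0.25}

merged = add_difference(A, B, C, m=1.0)
print(merged["w"])  # 0.5 + 1.0 * (0.75 - 0.25) = 1.0
```

Intuitively, `B - C` isolates what F222 learned on top of SD1.4, and with M=1.0 that full difference is added onto AnythingV3.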

Gotcha, thanks for the explanation and examples. Really love the results I'm getting with this <3

Hi, I've been trying to match the quality I've seen in other posts, but copying their settings always gives me lower-quality results than the originals. I'm also trying to reproduce the model card image, but my output quality seems worse than that.
I am using AnythingV3 and this embedding with the same settings.
stablediffusion.png
Another attempt with the settings you provided above.
stablediffusion.png

You are using pure Anything, so you want Clip skip: 2 (Settings -> Stable Diffusion).

masterpiece, best quality, solo 1girl
Negative prompt: sketch by bad-artist
Steps: 10, Sampler: DPM++ 2M Karras, CFG scale: 4, Seed: 3638101479, Size: 448x576, Clip skip: 2

Paste that whole thing into your positive prompt, and click the blue arrow emoji on the right.

Edit: it also looks like you are not using the VAE. Download the .vae.pt file from the anything-v3 Hugging Face repo and put it into models/VAE. Then go to Settings -> Stable Diffusion, select the VAE you just downloaded, and save.
