Source
The controlnets canny_control and lineart_anime_control were converted to diffusers format directly from the v1.1 originals in https://huggingface.co./lllyasviel.
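Since the converted weights are in diffusers format, they can be loaded like any other controlnet. A minimal sketch, assuming a local folder of converted weights (the controlnet path and base model below are placeholders, not part of this repository):

```python
# Minimal loading sketch; the controlnet path and base model are placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "./canny_control",  # placeholder: local diffusers-format controlnet folder
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x base model should work
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
```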
Try Style Controlnet with A1111 WebUI
Use the anime styling controlnets with the A1111 Stable Diffusion WebUI by downloading the weights from the A1111_webui_weights folder inside this repository. These weights can be used directly with the existing A1111 WebUI ControlNet extension; see this reddit post for instructions on using the extension.
For each anime controlnet there is a standard variant and a no-hint variant. Pass a black square as the controlnet conditioning image if you only want to add anime style guidance to the generation, or pass a canny-preprocessed anime image if you want to add both anime style and canny edge guidance. See the assets folder for example hints; both options are also sketched in code below.
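As a concrete illustration of the two hint options, here is how they might be built in Python; the input file name, the 512x512 size, and the canny thresholds are illustrative assumptions, following the usual opencv/PIL conventions:

```python
# Sketch of the two hint options: a black square for style-only guidance,
# or canny edges of an anime image for style + edge guidance.
import cv2
import numpy as np
from PIL import Image

# Option 1: style-only guidance -- a black square as the conditioning image.
black_hint = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))

# Option 2: style + canny guidance -- canny edges of an anime image.
anime = np.array(Image.open("anime_input.png").convert("RGB"))  # placeholder input
edges = cv2.Canny(anime, 100, 200)  # thresholds are illustrative
canny_hint = Image.fromarray(np.stack([edges] * 3, axis=-1))
```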
Generated using the anime_control_dreamshaper.safetensors controlnet with a canny hint.
Generated using the anime_control_dreamshaper.safetensors controlnet with a black square (numpy array of zeros) as the hint.
Generated using the anime_styler_dreamshaper.safetensors controlnet with no controlnet conditioning hint.
Grid from left to right: Controlnet weight 0.0 (base model output), Controlnet weight 0.5, Controlnet weight 1.0, Controlnet hint (white means no controlnet hint passed)
Generation settings for examples: Prompt: "1girl, blue eyes", Seed: 2048; all other settings are A1111 WebUI defaults.
Base model used for examples: Dreamshaper
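For reference, a similar weight sweep can be approximated in diffusers, where the A1111 controlnet weight corresponds to the controlnet_conditioning_scale argument. A hedged sketch reusing `pipe` and `black_hint` from the earlier snippets (A1111 and diffusers seeds are not interchangeable, so outputs will not match pixel for pixel):

```python
# Sweep the controlnet conditioning scale, analogous to the grid above.
import torch

for scale in (0.0, 0.5, 1.0):
    image = pipe(
        "1girl, blue eyes",
        image=black_hint,
        controlnet_conditioning_scale=scale,
        generator=torch.Generator("cuda").manual_seed(2048),
    ).images[0]
    image.save(f"sample_scale_{scale}.png")
```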
Details
These controlnets were initialized from a distinct UNet (andite/anything-v4.5) and predominantly trained without any controlnet conditioning image, on an anime image dataset synthetically generated from the base model (see lint/anybooru for a subset example of the training set). The main controlnet weights were then frozen, the input hint block weights were added back in, and the hint block was trained on the same dataset using canny image processing to generate the controlnet conditioning image.
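The repository does not include the training code, but in diffusers terms the second phase might look like the following sketch: freeze the whole controlnet, then unfreeze only the input hint block (controlnet_cond_embedding in diffusers' ControlNetModel). The path, optimizer, and learning rate are illustrative assumptions:

```python
# Sketch of the second training phase: train only the input hint block.
import torch
from diffusers import ControlNetModel

controlnet = ControlNetModel.from_pretrained("./anime_control")  # placeholder path
controlnet.requires_grad_(False)  # freeze all controlnet weights

hint_block = controlnet.controlnet_cond_embedding  # the input hint block
hint_block.requires_grad_(True)   # train only these weights

optimizer = torch.optim.AdamW(hint_block.parameters(), lr=1e-5)  # illustrative lr
# ...standard controlnet training loop over (image, canny hint) pairs goes here...
```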