---
license: openrail
---

## Source

The controlnets `canny_control` and `lineart_anime_control` were converted to `diffusers` format directly from the v1.1 originals at https://huggingface.co./lllyasviel

## [Try Style Controlnet with A1111 WebUI](https://github.com/1lint/style_controlnet)

Use the anime styling controlnet with the A1111 Stable Diffusion WebUI by downloading the weights from the `A1111_webui_weights` folder in this repository. These weights can be used directly with the existing [A1111 WebUI ControlNet extension](https://github.com/Mikubill/sd-webui-controlnet); see this reddit post for [instructions](https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/) on using the extension.

For each anime controlnet there is a standard variant and a no-hint variant.

### Hints

Pass a black square as the controlnet conditioning image if you only want to add anime style guidance to the generation, or pass an anime image preprocessed with canny edge detection if you want to add both anime style and canny guidance. See the `assets` folder for example hints.

_________________________________________________

### Generated using the `anime_control_dreamshaper.safetensors` controlnet with a canny hint

![](./assets/hint_grid.png)

_________________________________________________

### Generated using the `anime_control_dreamshaper.safetensors` controlnet with a black square (numpy array of zeros) as the hint

![](./assets/zerohint_grid.png)

_________________________________________________

### Generated using the `anime_styler_dreamshaper.safetensors` controlnet with no controlnet conditioning hint

![](./assets/nohint_grid.png)

_________________________________________________

### Grid layout

From left to right: controlnet weight 0.0 (base model output), controlnet weight 0.5, controlnet weight 1.0, controlnet hint (white means no controlnet hint was passed).

Generation settings for the examples: prompt "1girl, blue eyes", seed 2048, all other settings at A1111 WebUI defaults.

Base model used for the examples: [Dreamshaper](https://civitai.com/models/4384/dreamshaper)

_________________________________________________

## Details

These controlnets were initialized from a distinct UNet (`andite/anything-v4.5`) and predominantly trained without any controlnet conditioning image, on an anime image dataset synthetically generated with the base model (see `lint/anybooru` for a subset of the training set). The main controlnet weights were then frozen, the input hint block weights were added back in, and the model was trained on the same dataset, using canny edge detection to generate the controlnet conditioning images.
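
_________________________________________________

## Usage sketch with `diffusers` (unofficial)

Since the controlnets above are available in `diffusers` format, they can in principle be loaded with `ControlNetModel` and driven from a `StableDiffusionControlNetPipeline`. The following is a minimal sketch, not a verified recipe: the repo id, subfolder name, and base checkpoint are assumptions and should be adjusted to this repository's actual layout.

```python
# Unofficial sketch: load a converted controlnet and generate with a black-square
# hint so that only the style guidance is applied (see the hint notes above).
# The repo id "lint/anime_controlnet" and subfolder "canny_control" are assumptions.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lint/anime_controlnet",       # hypothetical repo id; use this repository's id
    subfolder="canny_control",     # assumed subfolder; check the repo file layout
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint; the examples above use Dreamshaper
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A zero (black) conditioning image corresponds to the "style guidance only" case.
hint = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))

image = pipe(
    "1girl, blue eyes",
    image=hint,
    num_inference_steps=25,
    controlnet_conditioning_scale=1.0,  # analogous to the A1111 controlnet weight
    generator=torch.Generator("cuda").manual_seed(2048),
).images[0]
image.save("anime_style_only.png")
```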
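
For the canny-guided case, the conditioning image is simply an edge map of a reference picture. One possible way to prepare it with OpenCV is sketched below; the input path and thresholds are placeholders, not values from this repository.

```python
# Sketch: build a canny hint image from an anime reference picture. The result can
# be passed as the controlnet conditioning image (e.g. the `image=` argument of a
# diffusers controlnet pipeline, or as the control image in the A1111 extension).
import cv2
import numpy as np
from PIL import Image

reference = cv2.imread("reference_anime.png")            # placeholder input path
edges = cv2.Canny(reference, 100, 200)                   # low/high thresholds to taste
hint = Image.fromarray(np.stack([edges] * 3, axis=-1))   # replicate to 3 channels

hint.save("canny_hint.png")
```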