1lint
committed on
Commit · a3a1d13
1 Parent(s): c0c9b65
consolidate controlnets
- A1111_controlnet_extension_weights/anime_control_dreamshaper.safetensors +3 -0
- A1111_controlnet_extension_weights/anime_control_neverending.safetensors +3 -0
- A1111_controlnet_extension_weights/anime_control_protogen.safetensors +3 -0
- A1111_controlnet_extension_weights/anime_control_realdosmix.safetensors +3 -0
- README.md +37 -0
- anime_dream/config.json +52 -0
- anime_dream/diffusion_pytorch_model.bin +3 -0
- anime_dreamshaper/config.json +45 -0
- anime_dreamshaper/diffusion_pytorch_model.safetensors +3 -0
- anime_neverending/config.json +44 -0
- anime_neverending/diffusion_pytorch_model.safetensors +3 -0
- anime_protogen/config.json +52 -0
- anime_protogen/diffusion_pytorch_model.safetensors +3 -0
- anime_realdosmix/config.json +45 -0
- anime_realdosmix/diffusion_pytorch_model.safetensors +3 -0
- anime_vinteprotomix/config.json +45 -0
- anime_vinteprotomix/diffusion_pytorch_model.safetensors +3 -0
- assets/black.png +0 -0
- assets/hint.png +0 -0
- assets/hint_grid.png +0 -0
- assets/nohint_grid.png +0 -0
- assets/zerohint_grid.png +0 -0
- lineart_anime_control/config.json +41 -0
- lineart_anime_control/diffusion_pytorch_model.bin +3 -0
A1111_controlnet_extension_weights/anime_control_dreamshaper.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26b896ee1b5ae1321f055bbd712cfa2b7eb90d7096c80b64973ee33119a89204
+size 722596338
A1111_controlnet_extension_weights/anime_control_neverending.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c37cbd4bd653ea420e848a344fe5ffaacfb60d986b97521759c9de3d3a48bb6e
+size 722596338
A1111_controlnet_extension_weights/anime_control_protogen.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2618357f330a5d4faa0e034011a637374c587031444469ea85b0676a7ec52bb7
+size 722596338
A1111_controlnet_extension_weights/anime_control_realdosmix.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:407ced814fb5701f8e3c6d28dcaff173055202299262974d13626ccabc938071
+size 722596338
README.md
ADDED
@@ -0,0 +1,37 @@
+---
+license: openrail
+---
+## Source
+
+The controlnets `canny_control` and `lineart_anime_control` were converted to `diffusers` format directly from the v1.1 originals at https://huggingface.co/lllyasviel
+
+## [Try Style Controlnet with the A1111 WebUI](https://github.com/1lint/style_controlnet)
+
+Use the anime styling controlnets with the A1111 Stable Diffusion WebUI by downloading the weights from the `A1111_controlnet_extension_weights` folder in this repository. They work directly with the existing [A1111 WebUI ControlNet Extension](https://github.com/Mikubill/sd-webui-controlnet); see this Reddit post for [instructions](https://www.reddit.com/r/StableDiffusion/comments/119o71b/a1111_controlnet_extension_explained_like_youre_5/) on using the extension.
+
+Each anime controlnet comes in a standard variant and a no-hint variant.
+
+### Pass a black square as the controlnet conditioning image to add only anime style guidance to image generation, or pass an anime image with canny preprocessing to add both anime style and canny guidance. See the `assets` folder for example hints.
+_________________________________________________
+
+### Generated using the `anime_control_dreamshaper.safetensors` controlnet with a canny hint
+![](./assets/hint_grid.png)
+_________________________________________________
+### Generated using the `anime_control_dreamshaper.safetensors` controlnet with a black square (numpy array of zeros) as the hint
+![](./assets/zerohint_grid.png)
+_________________________________________________
+### Generated using the `anime_styler_dreamshaper.safetensors` controlnet with no controlnet conditioning hint
+![](./assets/nohint_grid.png)
+_________________________________________________
+
+### Grid from left to right: controlnet weight 0.0 (base model output), controlnet weight 0.5, controlnet weight 1.0, controlnet hint (white means no hint was passed)
+
+Generation settings for the examples: prompt "1girl, blue eyes", seed 2048, all other settings at A1111 WebUI defaults.
+
+Base model used for the examples: [Dreamshaper](https://civitai.com/models/4384/dreamshaper)
+_________________________________________________
+
+## Details
+
+These controlnets were initialized from a separate UNet (`andite/anything-v4.5`) and trained predominantly without any controlnet conditioning image, on an anime image dataset generated synthetically from the base model (see `lint/anybooru` for a subset of the training set). The main controlnet weights were then frozen, the input hint block weights were added back in, and training continued on the same dataset using canny preprocessing to generate the controlnet conditioning image.
+
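
The usage described in the README above maps onto `diffusers` roughly as follows. This is a hedged sketch, not part of the commit: the folder and base-model ids are placeholders, and the non-standard config keys in these folders (`controlnet_conditioning_embedding_type`, `controlnet_conditioning_channels`) suggest the author's `style_controlnet` fork may be required rather than stock `diffusers`.

```python
# Hypothetical sketch: style-only guidance with one of the anime controlnets.
# Paths/ids are assumptions; the README's example grids used the Dreamshaper base model.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("./anime_dreamshaper", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A black square (all-zeros hint) adds only anime style guidance;
# swap in a canny edge map of an anime image to add structural guidance too.
hint = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))

image = pipe(
    "1girl, blue eyes",
    image=hint,
    num_inference_steps=20,
    controlnet_conditioning_scale=0.5,  # the README grids sweep 0.0 / 0.5 / 1.0
    generator=torch.manual_seed(2048),
).images[0]
image.save("anime_styled.png")
```
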
anime_dream/config.json
ADDED
@@ -0,0 +1,52 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.15.0.dev0",
+  "_name_or_path": "models/neverendingDreamNED_bakedVae_animestyler/checkpoint-24000",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "conditioning_embedding_out_channels": [
+    16,
+    32,
+    96,
+    256
+  ],
+  "controlnet_conditioning_channel_order": "rgb",
+  "controlnet_conditioning_channels": 0,
+  "controlnet_conditioning_embedding_type": "null",
+  "conv_in_kernel": 3,
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
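
The config above and the ones that follow are not identical: the hint-related fields (`conditioning_embedding_out_channels`, `controlnet_conditioning_channel_order`, `controlnet_conditioning_channels`, `controlnet_conditioning_embedding_type`) vary between folders, presumably reflecting the standard and no-hint variants mentioned in the README. A small sketch (an assumption, not part of the commit) to compare them:

```python
# Sketch (assumption): print the hint-related fields of every config under the
# repo root, to see how the controlnet folders differ from one another.
import json
from pathlib import Path

for cfg_path in sorted(Path(".").glob("*/config.json")):
    cfg = json.loads(cfg_path.read_text())
    print(
        f"{cfg_path.parent.name}: "
        f"channels={cfg.get('controlnet_conditioning_channels')}, "
        f"embedding_type={cfg.get('controlnet_conditioning_embedding_type')}, "
        f"embedding_out_channels={cfg.get('conditioning_embedding_out_channels')}"
    )
```
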
anime_dream/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47ab156c099f97e6f97952acc86a08dd22ccd72809d68406533afc7c95cdb4d7
+size 722672331
anime_dreamshaper/config.json
ADDED
@@ -0,0 +1,45 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.15.0.dev0",
+  "_name_or_path": "/home/user/style_controlnet/models/dreamshaper_331BakedVae_animestyler/checkpoint-320000",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "controlnet_conditioning_channels": 0,
+  "controlnet_conditioning_embedding_type": "null",
+  "conv_in_kernel": 3,
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
anime_dreamshaper/diffusion_pytorch_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0fdd80f915fbe7f61a31cb51b0d65653dd36ae4bf2dcc669bd6c4fe6ee99eba
+size 720423957
anime_neverending/config.json
ADDED
@@ -0,0 +1,44 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.15.0.dev0",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "controlnet_conditioning_channels": 3,
+  "controlnet_conditioning_embedding_type": "null",
+  "conv_in_kernel": 3,
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
anime_neverending/diffusion_pytorch_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5de72b8f81788c7d8ebbd514c15f3ec73df8246ab86275b71dcedcd90c3f1035
+size 1440809480
anime_protogen/config.json
ADDED
@@ -0,0 +1,52 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.15.0.dev0",
+  "_name_or_path": "models/protogenX58RebuiltScifi_10_animestyler/checkpoint-57544",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "conditioning_embedding_out_channels": [
+    16,
+    32,
+    96,
+    256
+  ],
+  "controlnet_conditioning_channel_order": "rgb",
+  "controlnet_conditioning_channels": 0,
+  "controlnet_conditioning_embedding_type": "null",
+  "conv_in_kernel": 3,
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
anime_protogen/diffusion_pytorch_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ded318fedc1fc6aab56ffe1f3574257541b013ffcd2b9d62918481998002c298
+size 722673357
anime_realdosmix/config.json
ADDED
@@ -0,0 +1,45 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.15.0.dev0",
+  "_name_or_path": "models/realdosmix__animestyler/checkpoint-70000",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "controlnet_conditioning_channels": 0,
+  "controlnet_conditioning_embedding_type": "null",
+  "conv_in_kernel": 3,
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
anime_realdosmix/diffusion_pytorch_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ad6185126aa57a7ca86366c04eab912a040ea845326e769a8c84541f82d8d8a
+size 1440809480
anime_vinteprotomix/config.json
ADDED
@@ -0,0 +1,45 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.14.0",
+  "_name_or_path": "lint/simpathizer",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "center_input_sample": false,
+  "class_embed_type": null,
+  "controlnet_conditioning_channels": 3,
+  "controlnet_conditioning_embedding_type": "null",
+  "conv_in_kernel": 3,
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "dual_cross_attention": false,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "mid_block_type": "UNetMidBlock2DCrossAttn",
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "sample_size": 64,
+  "time_cond_proj_dim": null,
+  "time_embedding_type": "positional",
+  "timestep_post_act": null,
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
anime_vinteprotomix/diffusion_pytorch_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e00b52dc99375b4d6f3ac592584569ffe7eee90c55a85aa292ed98116473b37
+size 720423962
assets/black.png
ADDED
assets/hint.png
ADDED
assets/hint_grid.png
ADDED
assets/nohint_grid.png
ADDED
assets/zerohint_grid.png
ADDED
lineart_anime_control/config.json
ADDED
@@ -0,0 +1,41 @@
+{
+  "_class_name": "ControlNetModel",
+  "_diffusers_version": "0.15.0.dev0",
+  "act_fn": "silu",
+  "attention_head_dim": 8,
+  "block_out_channels": [
+    320,
+    640,
+    1280,
+    1280
+  ],
+  "class_embed_type": null,
+  "conditioning_embedding_out_channels": [
+    16,
+    32,
+    96,
+    256
+  ],
+  "controlnet_conditioning_channel_order": "rgb",
+  "cross_attention_dim": 768,
+  "down_block_types": [
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "CrossAttnDownBlock2D",
+    "DownBlock2D"
+  ],
+  "downsample_padding": 1,
+  "flip_sin_to_cos": true,
+  "freq_shift": 0,
+  "in_channels": 4,
+  "layers_per_block": 2,
+  "mid_block_scale_factor": 1,
+  "norm_eps": 1e-05,
+  "norm_num_groups": 32,
+  "num_class_embeds": null,
+  "only_cross_attention": false,
+  "projection_class_embeddings_input_dim": null,
+  "resnet_time_scale_shift": "default",
+  "upcast_attention": false,
+  "use_linear_projection": false
+}
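
Per the README, `lineart_anime_control` is a direct `diffusers` conversion of the lllyasviel v1.1 controlnet, so it should work with the stock `ControlNetModel`. A hedged sketch of using it; the `controlnet_aux` preprocessor, annotator repo id, and base-model id are assumptions, not part of this commit:

```python
# Hypothetical usage of the converted lineart_anime controlnet with stock diffusers.
import torch
from controlnet_aux import LineartAnimeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Preprocess an anime image into a lineart hint (annotator weights assumed to
# come from the lllyasviel/Annotators repo used by controlnet_aux).
processor = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
control_image = processor(load_image("input_anime.png"))

controlnet = ControlNetModel.from_pretrained("./lineart_anime_control", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, blue eyes", image=control_image, num_inference_steps=20).images[0]
image.save("lineart_guided.png")
```
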
lineart_anime_control/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cbb004daa1a79a954eb2d268826417fbbc0327c25f1e7580244912204fef135
+size 722696633