mberke11 commited on
Commit
d6028e3
1 Parent(s): 11b9756

Upload folder using huggingface_hub

Browse files
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. .config/.last_opt_in_prompt.yaml +1 -0
  2. .config/.last_survey_prompt.yaml +1 -0
  3. .config/.last_update_check.json +1 -0
  4. .config/active_config +1 -0
  5. .config/config_sentinel +0 -0
  6. .config/configurations/config_default +6 -0
  7. .config/default_configs.db +0 -0
  8. .config/gce +1 -0
  9. .config/logs/2024.05.17/13.36.16.038415.log +534 -0
  10. .config/logs/2024.05.17/13.36.41.578276.log +5 -0
  11. .config/logs/2024.05.17/13.36.52.953916.log +169 -0
  12. .config/logs/2024.05.17/13.37.02.659444.log +5 -0
  13. .config/logs/2024.05.17/13.37.14.268709.log +8 -0
  14. .config/logs/2024.05.17/13.37.14.902972.log +8 -0
  15. .gitattributes +5 -0
  16. Comic_Generation.ipynb +3 -0
  17. LICENSE +201 -0
  18. README.md +154 -8
  19. app.py +750 -0
  20. cog.yaml +23 -0
  21. config/models.yaml +26 -0
  22. data/photomaker-v1.bin +3 -0
  23. examples/Robert/images.jpeg +0 -0
  24. examples/lecun/yann-lecun2.png +0 -0
  25. examples/taylor/1-1.png +0 -0
  26. examples/twoperson/1.jpeg +0 -0
  27. examples/twoperson/2.png +0 -0
  28. fonts/Inkfree.ttf +0 -0
  29. gradio_app_sdxl_specific_id_low_vram.py +1345 -0
  30. images/logo.png +0 -0
  31. images/pad_images.png +0 -0
  32. oldversion/gradio_app_sdxl_specific_id_mps.py +767 -0
  33. oldversion/gradio_app_sdxl_specific_id_old_version.py +782 -0
  34. predict.py +781 -0
  35. requirements.txt +15 -0
  36. results/20240520-164843/image_0.png +3 -0
  37. results/20240520-164843/image_1.png +0 -0
  38. results/20240520-164843/image_2.png +0 -0
  39. results/20240520-164843/image_3.png +0 -0
  40. results/20240520-164843/image_4.png +0 -0
  41. results/20240520-164843/image_5.png +0 -0
  42. results_examples/image1.png +3 -0
  43. sample_data/README.md +19 -0
  44. sample_data/anscombe.json +49 -0
  45. sample_data/california_housing_test.csv +0 -0
  46. sample_data/california_housing_train.csv +0 -0
  47. sample_data/mnist_test.csv +3 -0
  48. sample_data/mnist_train_small.csv +3 -0
  49. storydiffusionpipeline.py +0 -0
  50. update.md +28 -0
.config/.last_opt_in_prompt.yaml ADDED
@@ -0,0 +1 @@
 
 
1
+ {}
.config/.last_survey_prompt.yaml ADDED
@@ -0,0 +1 @@
 
 
1
+ last_prompt_time: 1715953012.3845286
.config/.last_update_check.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"last_update_check_time": 1715953022.1708608, "last_update_check_revision": 20240510142152, "notifications": [], "last_nag_times": {}}
.config/active_config ADDED
@@ -0,0 +1 @@
 
 
1
+ default
.config/config_sentinel ADDED
File without changes
.config/configurations/config_default ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ [component_manager]
2
+ disable_update_check = true
3
+
4
+ [compute]
5
+ gce_metadata_read_timeout_sec = 0
6
+
.config/default_configs.db ADDED
Binary file (12.3 kB). View file
 
.config/gce ADDED
@@ -0,0 +1 @@
 
 
1
+ False
.config/logs/2024.05.17/13.36.16.038415.log ADDED
@@ -0,0 +1,534 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ 2024-05-17 13:36:28,065 DEBUG root Loaded Command Group: ['gcloud', 'components']
2
+ 2024-05-17 13:36:28,069 DEBUG root Loaded Command Group: ['gcloud', 'components', 'update']
3
+ 2024-05-17 13:36:28,072 DEBUG root Running [gcloud.components.update] with arguments: [--allow-no-backup: "True", --compile-python: "True", --quiet: "True", COMPONENT-IDS:6: "['core', 'gcloud-deps', 'bq', 'gcloud', 'gcloud-crc32c', 'gsutil']"]
4
+ 2024-05-17 13:36:28,073 INFO ___FILE_ONLY___ Beginning update. This process may take several minutes.
5
+
6
+ 2024-05-17 13:36:28,098 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
7
+ 2024-05-17 13:36:28,231 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components-2.json HTTP/1.1" 200 222652
8
+ 2024-05-17 13:36:28,253 INFO ___FILE_ONLY___
9
+
10
+ 2024-05-17 13:36:28,253 INFO ___FILE_ONLY___
11
+ Your current Google Cloud CLI version is: 476.0.0
12
+
13
+ 2024-05-17 13:36:28,253 INFO ___FILE_ONLY___ Installing components from version: 476.0.0
14
+
15
+ 2024-05-17 13:36:28,253 INFO ___FILE_ONLY___
16
+
17
+ 2024-05-17 13:36:28,254 DEBUG root Chosen display Format:table[box,title="These components will be removed."](details.display_name:label=Name:align=left,version.version_string:label=Version:align=right,data.size.size(zero="",min=1048576):label=Size:align=right)
18
+ 2024-05-17 13:36:28,255 DEBUG root Chosen display Format:table[box,title="These components will be updated."](details.display_name:label=Name:align=left,version.version_string:label=Version:align=right,data.size.size(zero="",min=1048576):label=Size:align=right)
19
+ 2024-05-17 13:36:28,255 DEBUG root Chosen display Format:table[box,title="These components will be installed."](details.display_name:label=Name:align=left,version.version_string:label=Version:align=right,data.size.size(zero="",min=1048576):label=Size:align=right)
20
+ 2024-05-17 13:36:28,394 INFO ___FILE_ONLY___ ┌─────────────────────────────────────────────────────────────────────────────┐
21
+ 2024-05-17 13:36:28,394 INFO ___FILE_ONLY___
22
+
23
+ 2024-05-17 13:36:28,394 INFO ___FILE_ONLY___ │ These components will be installed. │
24
+ 2024-05-17 13:36:28,394 INFO ___FILE_ONLY___
25
+
26
+ 2024-05-17 13:36:28,394 INFO ___FILE_ONLY___ ├─────────────────────────────────────────────────────┬────────────┬──────────┤
27
+ 2024-05-17 13:36:28,394 INFO ___FILE_ONLY___
28
+
29
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ │ Name │ Version │ Size │
30
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___
31
+
32
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ ├─────────────────────────────────────────────────────┼────────────┼──────────┤
33
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___
34
+
35
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ │
36
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ BigQuery Command Line Tool
37
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___
38
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ │
39
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ 2.1.4
40
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___
41
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ │
42
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___ 1.7 MiB
43
+ 2024-05-17 13:36:28,395 INFO ___FILE_ONLY___
44
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ │
45
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___
46
+
47
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ │
48
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ BigQuery Command Line Tool (Platform Specific)
49
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___
50
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ │
51
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ 2.0.101
52
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___
53
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ │
54
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ < 1 MiB
55
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___
56
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ │
57
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___
58
+
59
+ 2024-05-17 13:36:28,396 INFO ___FILE_ONLY___ │
60
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ Bundled Python 3.11
61
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___
62
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ │
63
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ 3.11.8
64
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___
65
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ │
66
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ 75.1 MiB
67
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___
68
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ │
69
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___
70
+
71
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ │
72
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ Cloud Storage Command Line Tool
73
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___
74
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ │
75
+ 2024-05-17 13:36:28,397 INFO ___FILE_ONLY___ 5.29
76
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___
77
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ │
78
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ 11.3 MiB
79
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___
80
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ │
81
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___
82
+
83
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ │
84
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ Cloud Storage Command Line Tool (Platform Specific)
85
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___
86
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ │
87
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ 5.27
88
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___
89
+ 2024-05-17 13:36:28,398 INFO ___FILE_ONLY___ │
90
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ < 1 MiB
91
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___
92
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ │
93
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___
94
+
95
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ │
96
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ Google Cloud CLI Core Libraries (Platform Specific)
97
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___
98
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ │
99
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ 2024.01.06
100
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___
101
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ │
102
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ < 1 MiB
103
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___
104
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___ │
105
+ 2024-05-17 13:36:28,399 INFO ___FILE_ONLY___
106
+
107
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ │
108
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ Google Cloud CRC32C Hash Tool
109
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___
110
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ │
111
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ 1.0.0
112
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___
113
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ │
114
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ 1.2 MiB
115
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___
116
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ │
117
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___
118
+
119
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ │
120
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___ gcloud cli dependencies
121
+ 2024-05-17 13:36:28,400 INFO ___FILE_ONLY___
122
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___ │
123
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___ 2021.04.16
124
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___
125
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___ │
126
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___ < 1 MiB
127
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___
128
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___ │
129
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___
130
+
131
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___ └─────────────────────────────────────────────────────┴────────────┴──────────┘
132
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___
133
+
134
+ 2024-05-17 13:36:28,401 INFO ___FILE_ONLY___
135
+
136
+ 2024-05-17 13:36:28,406 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
137
+ 2024-05-17 13:36:28,484 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/RELEASE_NOTES HTTP/1.1" 200 1211411
138
+ 2024-05-17 13:36:28,610 INFO ___FILE_ONLY___ For the latest full release notes, please visit:
139
+ https://cloud.google.com/sdk/release_notes
140
+
141
+
142
+ 2024-05-17 13:36:28,612 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
143
+
144
+ 2024-05-17 13:36:28,612 INFO ___FILE_ONLY___ ╠═ Creating update staging area ═╣
145
+
146
+ 2024-05-17 13:36:28,612 INFO ___FILE_ONLY___ ╚
147
+ 2024-05-17 13:36:28,612 INFO ___FILE_ONLY___ ══════
148
+ 2024-05-17 13:36:28,613 INFO ___FILE_ONLY___ ══════
149
+ 2024-05-17 13:36:28,613 INFO ___FILE_ONLY___ ══════
150
+ 2024-05-17 13:36:28,884 INFO ___FILE_ONLY___ ═
151
+ 2024-05-17 13:36:28,934 INFO ___FILE_ONLY___ ═
152
+ 2024-05-17 13:36:28,974 INFO ___FILE_ONLY___ ═
153
+ 2024-05-17 13:36:29,010 INFO ___FILE_ONLY___ ═
154
+ 2024-05-17 13:36:29,050 INFO ___FILE_ONLY___ ═
155
+ 2024-05-17 13:36:29,089 INFO ___FILE_ONLY___ ═
156
+ 2024-05-17 13:36:29,133 INFO ___FILE_ONLY___ ═
157
+ 2024-05-17 13:36:29,175 INFO ___FILE_ONLY___ ═
158
+ 2024-05-17 13:36:29,221 INFO ___FILE_ONLY___ ═
159
+ 2024-05-17 13:36:29,372 INFO ___FILE_ONLY___ ═
160
+ 2024-05-17 13:36:29,459 INFO ___FILE_ONLY___ ═
161
+ 2024-05-17 13:36:29,610 INFO ___FILE_ONLY___ ═
162
+ 2024-05-17 13:36:29,776 INFO ___FILE_ONLY___ ═
163
+ 2024-05-17 13:36:29,837 INFO ___FILE_ONLY___ ═
164
+ 2024-05-17 13:36:29,913 INFO ___FILE_ONLY___ ═
165
+ 2024-05-17 13:36:29,987 INFO ___FILE_ONLY___ ═
166
+ 2024-05-17 13:36:30,049 INFO ___FILE_ONLY___ ═
167
+ 2024-05-17 13:36:30,114 INFO ___FILE_ONLY___ ═
168
+ 2024-05-17 13:36:30,175 INFO ___FILE_ONLY___ ═
169
+ 2024-05-17 13:36:30,241 INFO ___FILE_ONLY___ ═
170
+ 2024-05-17 13:36:30,310 INFO ___FILE_ONLY___ ═
171
+ 2024-05-17 13:36:30,369 INFO ___FILE_ONLY___ ═
172
+ 2024-05-17 13:36:30,445 INFO ___FILE_ONLY___ ═
173
+ 2024-05-17 13:36:30,528 INFO ___FILE_ONLY___ ═
174
+ 2024-05-17 13:36:30,612 INFO ___FILE_ONLY___ ═
175
+ 2024-05-17 13:36:30,691 INFO ___FILE_ONLY___ ═
176
+ 2024-05-17 13:36:30,755 INFO ___FILE_ONLY___ ═
177
+ 2024-05-17 13:36:30,823 INFO ___FILE_ONLY___ ═
178
+ 2024-05-17 13:36:30,887 INFO ___FILE_ONLY___ ═
179
+ 2024-05-17 13:36:30,953 INFO ___FILE_ONLY___ ═
180
+ 2024-05-17 13:36:31,011 INFO ___FILE_ONLY___ ═
181
+ 2024-05-17 13:36:31,070 INFO ___FILE_ONLY___ ═
182
+ 2024-05-17 13:36:31,125 INFO ___FILE_ONLY___ ═
183
+ 2024-05-17 13:36:31,186 INFO ___FILE_ONLY___ ═
184
+ 2024-05-17 13:36:31,262 INFO ___FILE_ONLY___ ═
185
+ 2024-05-17 13:36:31,313 INFO ___FILE_ONLY___ ═
186
+ 2024-05-17 13:36:31,381 INFO ___FILE_ONLY___ ═
187
+ 2024-05-17 13:36:31,445 INFO ___FILE_ONLY___ ═
188
+ 2024-05-17 13:36:31,510 INFO ___FILE_ONLY___ ═
189
+ 2024-05-17 13:36:31,558 INFO ___FILE_ONLY___ ═
190
+ 2024-05-17 13:36:31,625 INFO ___FILE_ONLY___ ═
191
+ 2024-05-17 13:36:31,689 INFO ___FILE_ONLY___ ═
192
+ 2024-05-17 13:36:31,689 INFO ___FILE_ONLY___ ╝
193
+
194
+ 2024-05-17 13:36:31,868 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
195
+
196
+ 2024-05-17 13:36:31,868 INFO ___FILE_ONLY___ ╠═ Installing: BigQuery Command Line Tool ═╣
197
+
198
+ 2024-05-17 13:36:31,868 INFO ___FILE_ONLY___ ╚
199
+ 2024-05-17 13:36:31,873 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
200
+ 2024-05-17 13:36:31,953 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-bq-20240412130805.tar.gz HTTP/1.1" 200 1746678
201
+ 2024-05-17 13:36:32,019 INFO ___FILE_ONLY___ ═
202
+ 2024-05-17 13:36:32,020 INFO ___FILE_ONLY___ ═
203
+ 2024-05-17 13:36:32,020 INFO ___FILE_ONLY___ ═
204
+ 2024-05-17 13:36:32,020 INFO ___FILE_ONLY___ ═
205
+ 2024-05-17 13:36:32,021 INFO ___FILE_ONLY___ ═
206
+ 2024-05-17 13:36:32,021 INFO ___FILE_ONLY___ ═
207
+ 2024-05-17 13:36:32,021 INFO ___FILE_ONLY___ ═
208
+ 2024-05-17 13:36:32,021 INFO ___FILE_ONLY___ ═
209
+ 2024-05-17 13:36:32,021 INFO ___FILE_ONLY___ ═
210
+ 2024-05-17 13:36:32,022 INFO ___FILE_ONLY___ ═
211
+ 2024-05-17 13:36:32,022 INFO ___FILE_ONLY___ ═
212
+ 2024-05-17 13:36:32,022 INFO ___FILE_ONLY___ ═
213
+ 2024-05-17 13:36:32,022 INFO ___FILE_ONLY___ ═
214
+ 2024-05-17 13:36:32,023 INFO ___FILE_ONLY___ ═
215
+ 2024-05-17 13:36:32,023 INFO ___FILE_ONLY___ ═
216
+ 2024-05-17 13:36:32,023 INFO ___FILE_ONLY___ ═
217
+ 2024-05-17 13:36:32,023 INFO ___FILE_ONLY___ ═
218
+ 2024-05-17 13:36:32,024 INFO ___FILE_ONLY___ ═
219
+ 2024-05-17 13:36:32,024 INFO ___FILE_ONLY___ ═
220
+ 2024-05-17 13:36:32,024 INFO ___FILE_ONLY___ ═
221
+ 2024-05-17 13:36:32,024 INFO ___FILE_ONLY___ ═
222
+ 2024-05-17 13:36:32,025 INFO ___FILE_ONLY___ ═
223
+ 2024-05-17 13:36:32,025 INFO ___FILE_ONLY___ ═
224
+ 2024-05-17 13:36:32,025 INFO ___FILE_ONLY___ ═
225
+ 2024-05-17 13:36:32,025 INFO ___FILE_ONLY___ ═
226
+ 2024-05-17 13:36:32,026 INFO ___FILE_ONLY___ ═
227
+ 2024-05-17 13:36:32,026 INFO ___FILE_ONLY___ ═
228
+ 2024-05-17 13:36:32,026 INFO ___FILE_ONLY___ ═
229
+ 2024-05-17 13:36:32,026 INFO ___FILE_ONLY___ ═
230
+ 2024-05-17 13:36:32,027 INFO ___FILE_ONLY___ ═
231
+ 2024-05-17 13:36:32,167 INFO ___FILE_ONLY___ ═
232
+ 2024-05-17 13:36:32,173 INFO ___FILE_ONLY___ ═
233
+ 2024-05-17 13:36:32,178 INFO ___FILE_ONLY___ ═
234
+ 2024-05-17 13:36:32,183 INFO ___FILE_ONLY___ ═
235
+ 2024-05-17 13:36:32,188 INFO ___FILE_ONLY___ ═
236
+ 2024-05-17 13:36:32,192 INFO ___FILE_ONLY___ ═
237
+ 2024-05-17 13:36:32,197 INFO ___FILE_ONLY___ ═
238
+ 2024-05-17 13:36:32,202 INFO ___FILE_ONLY___ ═
239
+ 2024-05-17 13:36:32,207 INFO ___FILE_ONLY___ ═
240
+ 2024-05-17 13:36:32,212 INFO ___FILE_ONLY___ ═
241
+ 2024-05-17 13:36:32,217 INFO ___FILE_ONLY___ ═
242
+ 2024-05-17 13:36:32,221 INFO ___FILE_ONLY___ ═
243
+ 2024-05-17 13:36:32,226 INFO ___FILE_ONLY___ ═
244
+ 2024-05-17 13:36:32,232 INFO ___FILE_ONLY___ ═
245
+ 2024-05-17 13:36:32,236 INFO ___FILE_ONLY___ ═
246
+ 2024-05-17 13:36:32,241 INFO ___FILE_ONLY___ ═
247
+ 2024-05-17 13:36:32,247 INFO ___FILE_ONLY___ ═
248
+ 2024-05-17 13:36:32,251 INFO ___FILE_ONLY___ ═
249
+ 2024-05-17 13:36:32,259 INFO ___FILE_ONLY___ ═
250
+ 2024-05-17 13:36:32,263 INFO ___FILE_ONLY___ ═
251
+ 2024-05-17 13:36:32,270 INFO ___FILE_ONLY___ ═
252
+ 2024-05-17 13:36:32,275 INFO ___FILE_ONLY___ ═
253
+ 2024-05-17 13:36:32,279 INFO ___FILE_ONLY___ ═
254
+ 2024-05-17 13:36:32,283 INFO ___FILE_ONLY___ ═
255
+ 2024-05-17 13:36:32,288 INFO ___FILE_ONLY___ ═
256
+ 2024-05-17 13:36:32,292 INFO ___FILE_ONLY___ ═
257
+ 2024-05-17 13:36:32,297 INFO ___FILE_ONLY___ ═
258
+ 2024-05-17 13:36:32,300 INFO ___FILE_ONLY___ ═
259
+ 2024-05-17 13:36:32,305 INFO ___FILE_ONLY___ ═
260
+ 2024-05-17 13:36:32,310 INFO ___FILE_ONLY___ ═
261
+ 2024-05-17 13:36:32,310 INFO ___FILE_ONLY___ ╝
262
+
263
+ 2024-05-17 13:36:32,327 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
264
+
265
+ 2024-05-17 13:36:32,328 INFO ___FILE_ONLY___ ╠═ Installing: BigQuery Command Line Tool (Platform Spec... ═╣
266
+
267
+ 2024-05-17 13:36:32,328 INFO ___FILE_ONLY___ ╚
268
+ 2024-05-17 13:36:32,332 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
269
+ 2024-05-17 13:36:32,402 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-bq-nix-20240106004423.tar.gz HTTP/1.1" 200 2026
270
+ 2024-05-17 13:36:32,403 INFO ___FILE_ONLY___ ══════════════════════════════
271
+ 2024-05-17 13:36:32,404 INFO ___FILE_ONLY___ ══════════════════════════════
272
+ 2024-05-17 13:36:32,404 INFO ___FILE_ONLY___ ╝
273
+
274
+ 2024-05-17 13:36:32,415 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
275
+
276
+ 2024-05-17 13:36:32,415 INFO ___FILE_ONLY___ ╠═ Installing: Bundled Python 3.11 ═╣
277
+
278
+ 2024-05-17 13:36:32,415 INFO ___FILE_ONLY___ ╚
279
+ 2024-05-17 13:36:32,421 INFO ___FILE_ONLY___ ════════════════════════════════════════════════════════════
280
+ 2024-05-17 13:36:32,421 INFO ___FILE_ONLY___ ╝
281
+
282
+ 2024-05-17 13:36:32,423 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
283
+
284
+ 2024-05-17 13:36:32,423 INFO ___FILE_ONLY___ ╠═ Installing: Bundled Python 3.11 ═╣
285
+
286
+ 2024-05-17 13:36:32,423 INFO ___FILE_ONLY___ ╚
287
+ 2024-05-17 13:36:32,427 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
288
+ 2024-05-17 13:36:32,567 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-bundled-python3-unix-linux-x86_64-20240510142152.tar.gz HTTP/1.1" 200 78697278
289
+ 2024-05-17 13:36:33,174 INFO ___FILE_ONLY___ ═
290
+ 2024-05-17 13:36:33,178 INFO ___FILE_ONLY___ ═
291
+ 2024-05-17 13:36:33,181 INFO ___FILE_ONLY___ ═
292
+ 2024-05-17 13:36:33,185 INFO ___FILE_ONLY___ ═
293
+ 2024-05-17 13:36:33,188 INFO ___FILE_ONLY___ ═
294
+ 2024-05-17 13:36:33,191 INFO ___FILE_ONLY___ ═
295
+ 2024-05-17 13:36:33,195 INFO ___FILE_ONLY___ ═
296
+ 2024-05-17 13:36:33,198 INFO ___FILE_ONLY___ ═
297
+ 2024-05-17 13:36:33,202 INFO ___FILE_ONLY___ ═
298
+ 2024-05-17 13:36:33,205 INFO ___FILE_ONLY___ ═
299
+ 2024-05-17 13:36:33,208 INFO ___FILE_ONLY___ ═
300
+ 2024-05-17 13:36:33,212 INFO ___FILE_ONLY___ ═
301
+ 2024-05-17 13:36:33,215 INFO ___FILE_ONLY___ ═
302
+ 2024-05-17 13:36:33,219 INFO ___FILE_ONLY___ ═
303
+ 2024-05-17 13:36:33,222 INFO ___FILE_ONLY___ ═
304
+ 2024-05-17 13:36:33,226 INFO ___FILE_ONLY___ ═
305
+ 2024-05-17 13:36:33,229 INFO ___FILE_ONLY___ ═
306
+ 2024-05-17 13:36:33,232 INFO ___FILE_ONLY___ ═
307
+ 2024-05-17 13:36:33,236 INFO ___FILE_ONLY___ ═
308
+ 2024-05-17 13:36:33,239 INFO ___FILE_ONLY___ ═
309
+ 2024-05-17 13:36:33,243 INFO ___FILE_ONLY___ ═
310
+ 2024-05-17 13:36:33,246 INFO ___FILE_ONLY___ ═
311
+ 2024-05-17 13:36:33,249 INFO ___FILE_ONLY___ ═
312
+ 2024-05-17 13:36:33,253 INFO ___FILE_ONLY___ ═
313
+ 2024-05-17 13:36:33,256 INFO ___FILE_ONLY___ ═
314
+ 2024-05-17 13:36:33,260 INFO ___FILE_ONLY___ ═
315
+ 2024-05-17 13:36:33,263 INFO ___FILE_ONLY___ ═
316
+ 2024-05-17 13:36:33,267 INFO ___FILE_ONLY___ ═
317
+ 2024-05-17 13:36:33,270 INFO ___FILE_ONLY___ ═
318
+ 2024-05-17 13:36:33,274 INFO ___FILE_ONLY___ ═
319
+ 2024-05-17 13:36:35,699 INFO ___FILE_ONLY___ ═
320
+ 2024-05-17 13:36:35,727 INFO ___FILE_ONLY___ ═
321
+ 2024-05-17 13:36:35,756 INFO ___FILE_ONLY___ ═
322
+ 2024-05-17 13:36:35,784 INFO ___FILE_ONLY___ ═
323
+ 2024-05-17 13:36:35,814 INFO ___FILE_ONLY___ ═
324
+ 2024-05-17 13:36:35,843 INFO ___FILE_ONLY___ ═
325
+ 2024-05-17 13:36:35,871 INFO ___FILE_ONLY___ ═
326
+ 2024-05-17 13:36:35,899 INFO ___FILE_ONLY___ ═
327
+ 2024-05-17 13:36:35,928 INFO ___FILE_ONLY___ ═
328
+ 2024-05-17 13:36:35,956 INFO ___FILE_ONLY___ ═
329
+ 2024-05-17 13:36:35,985 INFO ___FILE_ONLY___ ═
330
+ 2024-05-17 13:36:36,014 INFO ___FILE_ONLY___ ═
331
+ 2024-05-17 13:36:36,042 INFO ___FILE_ONLY___ ═
332
+ 2024-05-17 13:36:36,072 INFO ___FILE_ONLY___ ═
333
+ 2024-05-17 13:36:36,101 INFO ___FILE_ONLY___ ═
334
+ 2024-05-17 13:36:36,129 INFO ___FILE_ONLY___ ═
335
+ 2024-05-17 13:36:36,159 INFO ___FILE_ONLY___ ═
336
+ 2024-05-17 13:36:36,575 INFO ___FILE_ONLY___ ═
337
+ 2024-05-17 13:36:36,614 INFO ___FILE_ONLY___ ═
338
+ 2024-05-17 13:36:36,667 INFO ___FILE_ONLY___ ═
339
+ 2024-05-17 13:36:36,708 INFO ___FILE_ONLY___ ═
340
+ 2024-05-17 13:36:36,868 INFO ___FILE_ONLY___ ═
341
+ 2024-05-17 13:36:37,014 INFO ___FILE_ONLY___ ═
342
+ 2024-05-17 13:36:37,055 INFO ___FILE_ONLY___ ═
343
+ 2024-05-17 13:36:37,098 INFO ___FILE_ONLY___ ═
344
+ 2024-05-17 13:36:37,172 INFO ___FILE_ONLY___ ═
345
+ 2024-05-17 13:36:37,210 INFO ___FILE_ONLY___ ═
346
+ 2024-05-17 13:36:37,257 INFO ___FILE_ONLY___ ═
347
+ 2024-05-17 13:36:38,423 INFO ___FILE_ONLY___ ═
348
+ 2024-05-17 13:36:38,456 INFO ___FILE_ONLY___ ═
349
+ 2024-05-17 13:36:38,456 INFO ___FILE_ONLY___ ╝
350
+
351
+ 2024-05-17 13:36:38,572 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
352
+
353
+ 2024-05-17 13:36:38,573 INFO ___FILE_ONLY___ ╠═ Installing: Cloud Storage Command Line Tool ═╣
354
+
355
+ 2024-05-17 13:36:38,573 INFO ___FILE_ONLY___ ╚
356
+ 2024-05-17 13:36:38,577 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
357
+ 2024-05-17 13:36:38,719 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-gsutil-20240510142152.tar.gz HTTP/1.1" 200 11893574
358
+ 2024-05-17 13:36:38,852 INFO ___FILE_ONLY___ ═
359
+ 2024-05-17 13:36:38,853 INFO ___FILE_ONLY___ ═
360
+ 2024-05-17 13:36:38,854 INFO ___FILE_ONLY___ ═
361
+ 2024-05-17 13:36:38,854 INFO ___FILE_ONLY___ ═
362
+ 2024-05-17 13:36:38,855 INFO ___FILE_ONLY___ ═
363
+ 2024-05-17 13:36:38,855 INFO ___FILE_ONLY___ ═
364
+ 2024-05-17 13:36:38,856 INFO ___FILE_ONLY___ ═
365
+ 2024-05-17 13:36:38,856 INFO ___FILE_ONLY___ ═
366
+ 2024-05-17 13:36:38,857 INFO ___FILE_ONLY___ ═
367
+ 2024-05-17 13:36:38,858 INFO ___FILE_ONLY___ ═
368
+ 2024-05-17 13:36:38,858 INFO ___FILE_ONLY___ ═
369
+ 2024-05-17 13:36:38,859 INFO ___FILE_ONLY___ ═
370
+ 2024-05-17 13:36:38,859 INFO ___FILE_ONLY___ ═
371
+ 2024-05-17 13:36:38,860 INFO ___FILE_ONLY___ ═
372
+ 2024-05-17 13:36:38,860 INFO ___FILE_ONLY___ ═
373
+ 2024-05-17 13:36:38,861 INFO ___FILE_ONLY___ ═
374
+ 2024-05-17 13:36:38,862 INFO ___FILE_ONLY___ ═
375
+ 2024-05-17 13:36:38,862 INFO ___FILE_ONLY___ ═
376
+ 2024-05-17 13:36:38,863 INFO ___FILE_ONLY___ ═
377
+ 2024-05-17 13:36:38,863 INFO ___FILE_ONLY___ ═
378
+ 2024-05-17 13:36:38,864 INFO ___FILE_ONLY___ ═
379
+ 2024-05-17 13:36:38,865 INFO ___FILE_ONLY___ ═
380
+ 2024-05-17 13:36:38,865 INFO ___FILE_ONLY___ ═
381
+ 2024-05-17 13:36:38,866 INFO ___FILE_ONLY___ ═
382
+ 2024-05-17 13:36:38,866 INFO ___FILE_ONLY___ ═
383
+ 2024-05-17 13:36:38,867 INFO ___FILE_ONLY___ ═
384
+ 2024-05-17 13:36:38,868 INFO ___FILE_ONLY___ ═
385
+ 2024-05-17 13:36:38,868 INFO ___FILE_ONLY___ ═
386
+ 2024-05-17 13:36:38,869 INFO ___FILE_ONLY___ ═
387
+ 2024-05-17 13:36:38,869 INFO ___FILE_ONLY___ ═
388
+ 2024-05-17 13:36:39,671 INFO ___FILE_ONLY___ ═
389
+ 2024-05-17 13:36:39,711 INFO ___FILE_ONLY___ ═
390
+ 2024-05-17 13:36:39,739 INFO ___FILE_ONLY___ ═
391
+ 2024-05-17 13:36:39,771 INFO ___FILE_ONLY___ ═
392
+ 2024-05-17 13:36:39,800 INFO ___FILE_ONLY___ ═
393
+ 2024-05-17 13:36:39,825 INFO ___FILE_ONLY___ ═
394
+ 2024-05-17 13:36:39,848 INFO ___FILE_ONLY___ ═
395
+ 2024-05-17 13:36:39,871 INFO ___FILE_ONLY___ ═
396
+ 2024-05-17 13:36:39,893 INFO ___FILE_ONLY___ ═
397
+ 2024-05-17 13:36:39,915 INFO ___FILE_ONLY___ ═
398
+ 2024-05-17 13:36:39,941 INFO ___FILE_ONLY___ ═
399
+ 2024-05-17 13:36:39,976 INFO ___FILE_ONLY___ ═
400
+ 2024-05-17 13:36:40,009 INFO ___FILE_ONLY___ ═
401
+ 2024-05-17 13:36:40,048 INFO ___FILE_ONLY___ ═
402
+ 2024-05-17 13:36:40,073 INFO ___FILE_ONLY___ ═
403
+ 2024-05-17 13:36:40,096 INFO ___FILE_ONLY___ ═
404
+ 2024-05-17 13:36:40,120 INFO ___FILE_ONLY___ ═
405
+ 2024-05-17 13:36:40,147 INFO ___FILE_ONLY___ ═
406
+ 2024-05-17 13:36:40,176 INFO ___FILE_ONLY___ ═
407
+ 2024-05-17 13:36:40,197 INFO ___FILE_ONLY___ ═
408
+ 2024-05-17 13:36:40,221 INFO ___FILE_ONLY___ ═
409
+ 2024-05-17 13:36:40,248 INFO ___FILE_ONLY___ ═
410
+ 2024-05-17 13:36:40,274 INFO ___FILE_ONLY___ ═
411
+ 2024-05-17 13:36:40,296 INFO ___FILE_ONLY___ ═
412
+ 2024-05-17 13:36:40,320 INFO ___FILE_ONLY___ ═
413
+ 2024-05-17 13:36:40,346 INFO ___FILE_ONLY___ ═
414
+ 2024-05-17 13:36:40,398 INFO ___FILE_ONLY___ ═
415
+ 2024-05-17 13:36:40,429 INFO ___FILE_ONLY___ ═
416
+ 2024-05-17 13:36:40,464 INFO ___FILE_ONLY___ ═
417
+ 2024-05-17 13:36:40,490 INFO ___FILE_ONLY___ ═
418
+ 2024-05-17 13:36:40,490 INFO ___FILE_ONLY___ ╝
419
+
420
+ 2024-05-17 13:36:40,572 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
421
+
422
+ 2024-05-17 13:36:40,572 INFO ___FILE_ONLY___ ╠═ Installing: Cloud Storage Command Line Tool (Platform... ═╣
423
+
424
+ 2024-05-17 13:36:40,572 INFO ___FILE_ONLY___ ╚
425
+ 2024-05-17 13:36:40,576 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
426
+ 2024-05-17 13:36:40,709 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-gsutil-nix-20240106004423.tar.gz HTTP/1.1" 200 2042
427
+ 2024-05-17 13:36:40,710 INFO ___FILE_ONLY___ ══════════════════════════════
428
+ 2024-05-17 13:36:40,711 INFO ___FILE_ONLY___ ══════════════════════════════
429
+ 2024-05-17 13:36:40,711 INFO ___FILE_ONLY___ ╝
430
+
431
+ 2024-05-17 13:36:40,721 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
432
+
433
+ 2024-05-17 13:36:40,721 INFO ___FILE_ONLY___ ╠═ Installing: Default set of gcloud commands ═╣
434
+
435
+ 2024-05-17 13:36:40,721 INFO ___FILE_ONLY___ ╚
436
+ 2024-05-17 13:36:40,727 INFO ___FILE_ONLY___ ════════════════════════════════════════════════════════════
437
+ 2024-05-17 13:36:40,727 INFO ___FILE_ONLY___ ╝
438
+
439
+ 2024-05-17 13:36:40,729 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
440
+
441
+ 2024-05-17 13:36:40,730 INFO ___FILE_ONLY___ ╠═ Installing: Google Cloud CLI Core Libraries (Platform... ═╣
442
+
443
+ 2024-05-17 13:36:40,730 INFO ___FILE_ONLY___ ╚
444
+ 2024-05-17 13:36:40,734 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
445
+ 2024-05-17 13:36:40,805 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-core-nix-20240106004423.tar.gz HTTP/1.1" 200 2410
446
+ 2024-05-17 13:36:40,805 INFO ___FILE_ONLY___ ══════════════════════════════
447
+ 2024-05-17 13:36:40,807 INFO ___FILE_ONLY___ ═══════════════
448
+ 2024-05-17 13:36:40,807 INFO ___FILE_ONLY___ ═══════════════
449
+ 2024-05-17 13:36:40,807 INFO ___FILE_ONLY___ ╝
450
+
451
+ 2024-05-17 13:36:40,817 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
452
+
453
+ 2024-05-17 13:36:40,817 INFO ___FILE_ONLY___ ╠═ Installing: Google Cloud CRC32C Hash Tool ═╣
454
+
455
+ 2024-05-17 13:36:40,817 INFO ___FILE_ONLY___ ╚
456
+ 2024-05-17 13:36:40,823 INFO ___FILE_ONLY___ ════════════════════════════════════════════════════════════
457
+ 2024-05-17 13:36:40,823 INFO ___FILE_ONLY___ ╝
458
+
459
+ 2024-05-17 13:36:40,825 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
460
+
461
+ 2024-05-17 13:36:40,825 INFO ___FILE_ONLY___ ╠═ Installing: Google Cloud CRC32C Hash Tool ═╣
462
+
463
+ 2024-05-17 13:36:40,825 INFO ___FILE_ONLY___ ╚
464
+ 2024-05-17 13:36:40,829 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
465
+ 2024-05-17 13:36:40,903 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-gcloud-crc32c-linux-x86_64-20231215195722.tar.gz HTTP/1.1" 200 1287877
466
+ 2024-05-17 13:36:40,966 INFO ___FILE_ONLY___ ═
467
+ 2024-05-17 13:36:40,966 INFO ___FILE_ONLY___ ═
468
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
469
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
470
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
471
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
472
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
473
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
474
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
475
+ 2024-05-17 13:36:40,967 INFO ___FILE_ONLY___ ═
476
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
477
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
478
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
479
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
480
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
481
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
482
+ 2024-05-17 13:36:40,968 INFO ___FILE_ONLY___ ═
483
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
484
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
485
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
486
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
487
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
488
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
489
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
490
+ 2024-05-17 13:36:40,969 INFO ___FILE_ONLY___ ═
491
+ 2024-05-17 13:36:40,970 INFO ___FILE_ONLY___ ═
492
+ 2024-05-17 13:36:40,970 INFO ___FILE_ONLY___ ═
493
+ 2024-05-17 13:36:40,970 INFO ___FILE_ONLY___ ═
494
+ 2024-05-17 13:36:40,970 INFO ___FILE_ONLY___ ═
495
+ 2024-05-17 13:36:40,970 INFO ___FILE_ONLY___ ═
496
+ 2024-05-17 13:36:41,005 INFO ___FILE_ONLY___ ═══════════════
497
+ 2024-05-17 13:36:41,006 INFO ___FILE_ONLY___ ═══════════════
498
+ 2024-05-17 13:36:41,006 INFO ___FILE_ONLY___ ╝
499
+
500
+ 2024-05-17 13:36:41,017 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
501
+
502
+ 2024-05-17 13:36:41,017 INFO ___FILE_ONLY___ ╠═ Installing: gcloud cli dependencies ═╣
503
+
504
+ 2024-05-17 13:36:41,017 INFO ___FILE_ONLY___ ╚
505
+ 2024-05-17 13:36:41,021 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
506
+ 2024-05-17 13:36:41,094 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-gcloud-deps-linux-x86_64-20210416153011.tar.gz HTTP/1.1" 200 104
507
+ 2024-05-17 13:36:41,094 INFO ___FILE_ONLY___ ══════════════════════════════
508
+ 2024-05-17 13:36:41,095 INFO ___FILE_ONLY___ ══════════════════════════════
509
+ 2024-05-17 13:36:41,095 INFO ___FILE_ONLY___ ╝
510
+
511
+ 2024-05-17 13:36:41,104 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
512
+
513
+ 2024-05-17 13:36:41,105 INFO ___FILE_ONLY___ ╠═ Creating backup and activating new installation ═╣
514
+
515
+ 2024-05-17 13:36:41,105 INFO ___FILE_ONLY___ ╚
516
+ 2024-05-17 13:36:41,105 DEBUG root Attempting to move directory [/tools/google-cloud-sdk] to [/tools/google-cloud-sdk.staging/.install/.backup]
517
+ 2024-05-17 13:36:41,105 INFO ___FILE_ONLY___ ══════════════════════════════
518
+ 2024-05-17 13:36:41,105 DEBUG root Attempting to move directory [/tools/google-cloud-sdk.staging] to [/tools/google-cloud-sdk]
519
+ 2024-05-17 13:36:41,105 INFO ___FILE_ONLY___ ══════════════════════════════
520
+ 2024-05-17 13:36:41,105 INFO ___FILE_ONLY___ ╝
521
+
522
+ 2024-05-17 13:36:41,109 DEBUG root Updating notification cache...
523
+ 2024-05-17 13:36:41,110 INFO ___FILE_ONLY___
524
+
525
+ 2024-05-17 13:36:41,112 INFO ___FILE_ONLY___ Performing post processing steps...
526
+ 2024-05-17 13:36:41,113 DEBUG root Executing command: ['/tools/google-cloud-sdk/bin/gcloud', 'components', 'post-process']
527
+ 2024-05-17 13:36:52,272 DEBUG ___FILE_ONLY___
528
+ 2024-05-17 13:36:52,272 DEBUG ___FILE_ONLY___
529
+ 2024-05-17 13:36:52,379 INFO ___FILE_ONLY___
530
+ Update done!
531
+
532
+
533
+ 2024-05-17 13:36:52,383 DEBUG root Chosen display Format:none
534
+ 2024-05-17 13:36:52,383 INFO root Display format: "none"
.config/logs/2024.05.17/13.36.41.578276.log ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ 2024-05-17 13:36:41,579 DEBUG root Loaded Command Group: ['gcloud', 'components']
2
+ 2024-05-17 13:36:41,581 DEBUG root Loaded Command Group: ['gcloud', 'components', 'post_process']
3
+ 2024-05-17 13:36:41,584 DEBUG root Running [gcloud.components.post-process] with arguments: []
4
+ 2024-05-17 13:36:52,181 DEBUG root Chosen display Format:none
5
+ 2024-05-17 13:36:52,182 INFO root Display format: "none"
.config/logs/2024.05.17/13.36.52.953916.log ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ 2024-05-17 13:36:52,955 DEBUG root Loaded Command Group: ['gcloud', 'components']
2
+ 2024-05-17 13:36:52,957 DEBUG root Loaded Command Group: ['gcloud', 'components', 'update']
3
+ 2024-05-17 13:36:52,960 DEBUG root Running [gcloud.components.update] with arguments: [--quiet: "True", COMPONENT-IDS:8: "['gcloud', 'core', 'bq', 'gsutil', 'compute', 'preview', 'alpha', 'beta']"]
4
+ 2024-05-17 13:36:52,962 INFO ___FILE_ONLY___ Beginning update. This process may take several minutes.
5
+
6
+ 2024-05-17 13:36:52,970 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
7
+ 2024-05-17 13:36:53,045 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components-2.json HTTP/1.1" 200 222652
8
+ 2024-05-17 13:36:53,065 WARNING root Component [preview] no longer exists.
9
+ 2024-05-17 13:36:53,066 WARNING root Component [compute] no longer exists.
10
+ 2024-05-17 13:36:53,067 INFO ___FILE_ONLY___
11
+
12
+ 2024-05-17 13:36:53,067 INFO ___FILE_ONLY___
13
+ Your current Google Cloud CLI version is: 476.0.0
14
+
15
+ 2024-05-17 13:36:53,068 INFO ___FILE_ONLY___ Installing components from version: 476.0.0
16
+
17
+ 2024-05-17 13:36:53,068 INFO ___FILE_ONLY___
18
+
19
+ 2024-05-17 13:36:53,068 DEBUG root Chosen display Format:table[box,title="These components will be removed."](details.display_name:label=Name:align=left,version.version_string:label=Version:align=right,data.size.size(zero="",min=1048576):label=Size:align=right)
20
+ 2024-05-17 13:36:53,069 DEBUG root Chosen display Format:table[box,title="These components will be updated."](details.display_name:label=Name:align=left,version.version_string:label=Version:align=right,data.size.size(zero="",min=1048576):label=Size:align=right)
21
+ 2024-05-17 13:36:53,070 DEBUG root Chosen display Format:table[box,title="These components will be installed."](details.display_name:label=Name:align=left,version.version_string:label=Version:align=right,data.size.size(zero="",min=1048576):label=Size:align=right)
22
+ 2024-05-17 13:36:53,111 INFO ___FILE_ONLY___ ┌──────────────────────────────────────────────┐
23
+ 2024-05-17 13:36:53,111 INFO ___FILE_ONLY___
24
+
25
+ 2024-05-17 13:36:53,111 INFO ___FILE_ONLY___ │ These components will be installed. │
26
+ 2024-05-17 13:36:53,111 INFO ___FILE_ONLY___
27
+
28
+ 2024-05-17 13:36:53,111 INFO ___FILE_ONLY___ ├───────────────────────┬────────────┬─────────┤
29
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___
30
+
31
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ │ Name │ Version │ Size │
32
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___
33
+
34
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ ├───────────────────────┼────────────┼─────────┤
35
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___
36
+
37
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ │
38
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ gcloud Alpha Commands
39
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___
40
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ │
41
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ 2024.05.10
42
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___
43
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ │
44
+ 2024-05-17 13:36:53,112 INFO ___FILE_ONLY___ < 1 MiB
45
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___
46
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ │
47
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___
48
+
49
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ │
50
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ gcloud Beta Commands
51
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___
52
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ │
53
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ 2024.05.10
54
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___
55
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ │
56
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ < 1 MiB
57
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___
58
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ │
59
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___
60
+
61
+ 2024-05-17 13:36:53,113 INFO ___FILE_ONLY___ └───────────────────────┴────────────┴─────────┘
62
+ 2024-05-17 13:36:53,114 INFO ___FILE_ONLY___
63
+
64
+ 2024-05-17 13:36:53,114 INFO ___FILE_ONLY___
65
+
66
+ 2024-05-17 13:36:53,118 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
67
+ 2024-05-17 13:36:53,277 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/RELEASE_NOTES HTTP/1.1" 200 1211411
68
+ 2024-05-17 13:36:53,402 INFO ___FILE_ONLY___ For the latest full release notes, please visit:
69
+ https://cloud.google.com/sdk/release_notes
70
+
71
+
72
+ 2024-05-17 13:36:53,405 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
73
+
74
+ 2024-05-17 13:36:53,405 INFO ___FILE_ONLY___ ╠═ Creating update staging area ═╣
75
+
76
+ 2024-05-17 13:36:53,405 INFO ___FILE_ONLY___ ╚
77
+ 2024-05-17 13:36:53,406 INFO ___FILE_ONLY___ ══════
78
+ 2024-05-17 13:36:54,181 INFO ___FILE_ONLY___ ══════
79
+ 2024-05-17 13:36:54,182 INFO ___FILE_ONLY___ ══════
80
+ 2024-05-17 13:36:54,618 INFO ___FILE_ONLY___ ═
81
+ 2024-05-17 13:36:54,677 INFO ___FILE_ONLY___ ═
82
+ 2024-05-17 13:36:54,724 INFO ___FILE_ONLY___ ═
83
+ 2024-05-17 13:36:54,768 INFO ___FILE_ONLY___ ═
84
+ 2024-05-17 13:36:54,813 INFO ___FILE_ONLY___ ═
85
+ 2024-05-17 13:36:54,864 INFO ___FILE_ONLY___ ═
86
+ 2024-05-17 13:36:54,911 INFO ___FILE_ONLY___ ═
87
+ 2024-05-17 13:36:54,992 INFO ___FILE_ONLY___ ═
88
+ 2024-05-17 13:36:55,166 INFO ___FILE_ONLY___ ═
89
+ 2024-05-17 13:36:55,287 INFO ___FILE_ONLY___ ═
90
+ 2024-05-17 13:36:55,517 INFO ___FILE_ONLY___ ═
91
+ 2024-05-17 13:36:55,695 INFO ___FILE_ONLY___ ═
92
+ 2024-05-17 13:36:55,960 INFO ___FILE_ONLY___ ═
93
+ 2024-05-17 13:36:56,056 INFO ___FILE_ONLY___ ═
94
+ 2024-05-17 13:36:56,137 INFO ___FILE_ONLY___ ═
95
+ 2024-05-17 13:36:56,208 INFO ___FILE_ONLY___ ═
96
+ 2024-05-17 13:36:56,298 INFO ___FILE_ONLY___ ═
97
+ 2024-05-17 13:36:56,364 INFO ___FILE_ONLY___ ═
98
+ 2024-05-17 13:36:56,433 INFO ___FILE_ONLY___ ═
99
+ 2024-05-17 13:36:56,497 INFO ___FILE_ONLY___ ═
100
+ 2024-05-17 13:36:56,568 INFO ___FILE_ONLY___ ═
101
+ 2024-05-17 13:36:56,631 INFO ___FILE_ONLY___ ═
102
+ 2024-05-17 13:36:56,703 INFO ___FILE_ONLY___ ═
103
+ 2024-05-17 13:36:56,774 INFO ___FILE_ONLY___ ═
104
+ 2024-05-17 13:36:56,847 INFO ___FILE_ONLY___ ═
105
+ 2024-05-17 13:36:56,914 INFO ___FILE_ONLY___ ═
106
+ 2024-05-17 13:36:56,982 INFO ___FILE_ONLY___ ═
107
+ 2024-05-17 13:36:57,072 INFO ___FILE_ONLY___ ═
108
+ 2024-05-17 13:36:57,151 INFO ___FILE_ONLY___ ═
109
+ 2024-05-17 13:36:57,300 INFO ___FILE_ONLY___ ═
110
+ 2024-05-17 13:36:57,400 INFO ___FILE_ONLY___ ═
111
+ 2024-05-17 13:36:57,464 INFO ___FILE_ONLY___ ═
112
+ 2024-05-17 13:36:57,551 INFO ___FILE_ONLY___ ═
113
+ 2024-05-17 13:36:57,624 INFO ___FILE_ONLY___ ═
114
+ 2024-05-17 13:36:57,698 INFO ___FILE_ONLY___ ═
115
+ 2024-05-17 13:36:57,773 INFO ___FILE_ONLY___ ═
116
+ 2024-05-17 13:36:57,857 INFO ___FILE_ONLY___ ═
117
+ 2024-05-17 13:36:57,933 INFO ___FILE_ONLY___ ═
118
+ 2024-05-17 13:36:58,024 INFO ___FILE_ONLY___ ═
119
+ 2024-05-17 13:36:58,098 INFO ___FILE_ONLY___ ═
120
+ 2024-05-17 13:36:58,174 INFO ___FILE_ONLY___ ═
121
+ 2024-05-17 13:36:58,243 INFO ___FILE_ONLY___ ═
122
+ 2024-05-17 13:36:58,243 INFO ___FILE_ONLY___ ╝
123
+
124
+ 2024-05-17 13:37:01,898 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
125
+
126
+ 2024-05-17 13:37:01,899 INFO ___FILE_ONLY___ ╠═ Installing: gcloud Alpha Commands ═╣
127
+
128
+ 2024-05-17 13:37:01,899 INFO ___FILE_ONLY___ ╚
129
+ 2024-05-17 13:37:01,903 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
130
+ 2024-05-17 13:37:02,003 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-alpha-20240510142152.tar.gz HTTP/1.1" 200 800
131
+ 2024-05-17 13:37:02,004 INFO ___FILE_ONLY___ ══════════════════════════════
132
+ 2024-05-17 13:37:02,005 INFO ___FILE_ONLY___ ══════════════════════════════
133
+ 2024-05-17 13:37:02,005 INFO ___FILE_ONLY___ ╝
134
+
135
+ 2024-05-17 13:37:02,015 INFO ___FILE_ONLY___ ╔════════════════════════════════════════════════════════════╗
136
+
137
+ 2024-05-17 13:37:02,015 INFO ___FILE_ONLY___ ╠═ Installing: gcloud Beta Commands ═╣
138
+
139
+ 2024-05-17 13:37:02,015 INFO ___FILE_ONLY___ ╚
140
+ 2024-05-17 13:37:02,019 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): dl.google.com:443
141
+ 2024-05-17 13:37:02,152 DEBUG urllib3.connectionpool https://dl.google.com:443 "GET /dl/cloudsdk/channels/rapid/components/google-cloud-sdk-beta-20240510142152.tar.gz HTTP/1.1" 200 797
142
+ 2024-05-17 13:37:02,153 INFO ___FILE_ONLY___ ══════════════════════════════
143
+ 2024-05-17 13:37:02,154 INFO ___FILE_ONLY___ ══════════════════════════════
144
+ 2024-05-17 13:37:02,154 INFO ___FILE_ONLY___ ╝
145
+
146
+ 2024-05-17 13:37:02,165 INFO ___FILE_ONLY___ ��════════════════════════════════════════════════════════════╗
147
+
148
+ 2024-05-17 13:37:02,165 INFO ___FILE_ONLY___ ╠═ Creating backup and activating new installation ═╣
149
+
150
+ 2024-05-17 13:37:02,165 INFO ___FILE_ONLY___ ╚
151
+ 2024-05-17 13:37:02,165 DEBUG root Attempting to move directory [/tools/google-cloud-sdk] to [/tools/google-cloud-sdk.staging/.install/.backup]
152
+ 2024-05-17 13:37:02,165 INFO ___FILE_ONLY___ ══════════════════════════════
153
+ 2024-05-17 13:37:02,166 DEBUG root Attempting to move directory [/tools/google-cloud-sdk.staging] to [/tools/google-cloud-sdk]
154
+ 2024-05-17 13:37:02,166 INFO ___FILE_ONLY___ ══════════════════════════════
155
+ 2024-05-17 13:37:02,166 INFO ___FILE_ONLY___ ╝
156
+
157
+ 2024-05-17 13:37:02,170 DEBUG root Updating notification cache...
158
+ 2024-05-17 13:37:02,171 INFO ___FILE_ONLY___
159
+
160
+ 2024-05-17 13:37:02,173 INFO ___FILE_ONLY___ Performing post processing steps...
161
+ 2024-05-17 13:37:02,173 DEBUG root Executing command: ['/tools/google-cloud-sdk/bin/gcloud', 'components', 'post-process']
162
+ 2024-05-17 13:37:13,462 DEBUG ___FILE_ONLY___
163
+ 2024-05-17 13:37:13,463 DEBUG ___FILE_ONLY___
164
+ 2024-05-17 13:37:13,691 INFO ___FILE_ONLY___
165
+ Update done!
166
+
167
+
168
+ 2024-05-17 13:37:13,694 DEBUG root Chosen display Format:none
169
+ 2024-05-17 13:37:13,695 INFO root Display format: "none"
.config/logs/2024.05.17/13.37.02.659444.log ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ 2024-05-17 13:37:02,660 DEBUG root Loaded Command Group: ['gcloud', 'components']
2
+ 2024-05-17 13:37:02,662 DEBUG root Loaded Command Group: ['gcloud', 'components', 'post_process']
3
+ 2024-05-17 13:37:02,665 DEBUG root Running [gcloud.components.post-process] with arguments: []
4
+ 2024-05-17 13:37:13,365 DEBUG root Chosen display Format:none
5
+ 2024-05-17 13:37:13,366 INFO root Display format: "none"
.config/logs/2024.05.17/13.37.14.268709.log ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ 2024-05-17 13:37:14,271 DEBUG root Loaded Command Group: ['gcloud', 'config']
2
+ 2024-05-17 13:37:14,326 DEBUG root Loaded Command Group: ['gcloud', 'config', 'set']
3
+ 2024-05-17 13:37:14,329 DEBUG root Running [gcloud.config.set] with arguments: [SECTION/PROPERTY: "component_manager/disable_update_check", VALUE: "true"]
4
+ 2024-05-17 13:37:14,330 INFO ___FILE_ONLY___ Updated property [component_manager/disable_update_check].
5
+
6
+ 2024-05-17 13:37:14,331 DEBUG root Chosen display Format:default
7
+ 2024-05-17 13:37:14,332 INFO root Display format: "default"
8
+ 2024-05-17 13:37:14,332 DEBUG root SDK update checks are disabled.
.config/logs/2024.05.17/13.37.14.902972.log ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ 2024-05-17 13:37:14,905 DEBUG root Loaded Command Group: ['gcloud', 'config']
2
+ 2024-05-17 13:37:14,959 DEBUG root Loaded Command Group: ['gcloud', 'config', 'set']
3
+ 2024-05-17 13:37:14,962 DEBUG root Running [gcloud.config.set] with arguments: [SECTION/PROPERTY: "compute/gce_metadata_read_timeout_sec", VALUE: "0"]
4
+ 2024-05-17 13:37:14,963 INFO ___FILE_ONLY___ Updated property [compute/gce_metadata_read_timeout_sec].
5
+
6
+ 2024-05-17 13:37:14,964 DEBUG root Chosen display Format:default
7
+ 2024-05-17 13:37:14,964 INFO root Display format: "default"
8
+ 2024-05-17 13:37:14,965 DEBUG root SDK update checks are disabled.
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
 
 
 
 
 
 
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ Comic_Generation.ipynb filter=lfs diff=lfs merge=lfs -text
37
+ results/20240520-164843/image_0.png filter=lfs diff=lfs merge=lfs -text
38
+ results_examples/image1.png filter=lfs diff=lfs merge=lfs -text
39
+ sample_data/mnist_test.csv filter=lfs diff=lfs merge=lfs -text
40
+ sample_data/mnist_train_small.csv filter=lfs diff=lfs merge=lfs -text
Comic_Generation.ipynb ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:805ef26cdefe0c1b1256c350016dadd6f9225ccdc09ac957e4aa66f9e811ed9d
3
+ size 19370926
LICENSE ADDED
@@ -0,0 +1,201 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
README.md CHANGED
@@ -1,12 +1,158 @@
1
  ---
2
- title: Story
3
- emoji: 🏃
4
- colorFrom: purple
5
- colorTo: yellow
6
  sdk: gradio
7
- sdk_version: 4.31.4
8
- app_file: app.py
9
- pinned: false
10
  ---
 
 
 
11
 
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
  ---
2
+ title: story
3
+ app_file: gradio_app_sdxl_specific_id_low_vram.py
 
 
4
  sdk: gradio
5
+ sdk_version: 4.22.0
 
 
6
  ---
7
+ <p align="center">
8
+ <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/f79da6b7-0b3b-4dd7-8dd0-ba0b15306fe6" height=100>
9
+ </p>
10
 
11
+ <div align="center">
12
+
13
+ ## StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation [![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md-dark.svg)]()
14
+
15
+ [[Paper](https://arxiv.org/abs/2405.01434)] &emsp; [[Project Page](https://storydiffusion.github.io/)] &emsp; [[🤗 Comic Generation Demo ](https://huggingface.co/spaces/YupengZhou/StoryDiffusion)] [![Replicate](https://replicate.com/cjwbw/StoryDiffusion/badge)](https://replicate.com/cjwbw/StoryDiffusion) [![Run Comics Demo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/HVision-NKU/StoryDiffusion/blob/main/Comic_Generation.ipynb) <br>
16
+ </div>
17
+
18
+
19
+ ---
20
+
21
+ Official implementation of **[StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation]()**.
22
+
23
+ ### **Demo Video**
24
+
25
+ https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/d5b80f8f-09b0-48cd-8b10-daff46d422af
26
+
27
+
28
+ ### Update History
29
+
30
+ ***You can visit [here](update.md) to visit update history.***
31
+
32
+ ### 🌠 **Key Features:**
33
+ StoryDiffusion can create a magic story by generating consistent images and videos. Our work mainly has two parts:
34
+ 1. Consistent self-attention for character-consistent image generation over long-range sequences. It is hot-pluggable and compatible with all SD1.5 and SDXL-based image diffusion models. For the current implementation, the user needs to provide at least 3 text prompts for the consistent self-attention module. We recommend at least 5 - 6 text prompts for better layout arrangement.
35
+ 2. Motion predictor for long-range video generation, which predicts motion between Condition Images in a compressed image semantic space, achieving larger motion prediction.
36
+
37
+
38
+
39
+ ## 🔥 **Examples**
40
+
41
+
42
+ ### Comics generation
43
+
44
+
45
+ ![1](https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/b3771cbc-b6ca-4e26-bdc5-d944daf9f266)
46
+
47
+
48
+
49
+ ### Image-to-Video generation (Results are HIGHLY compressed for speed)
50
+ Leveraging the images produced through our Consistent Self-Attention mechanism, we can extend the process to create videos by seamlessly transitioning between these images. This can be considered as a two-stage long video generation approach.
51
+
52
+ Note: results are **highly compressed** for speed, you can visit [our website](https://storydiffusion.github.io/) for the high-quality version.
53
+ #### Two-stage Long Videos Generation (New Update)
54
+ Combining the two parts, we can generate very long and high-quality AIGC videos.
55
+ | Video1 | Video2 | Video3 |
56
+ | --- | --- | --- |
57
+ | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/4e7e0f24-5f90-419b-9a1e-cdf36d361b26" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/f509343d-d691-4e2a-b615-7d96381ef7c1" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/4f0f7abb-4ae4-47a6-b692-5bdd8d9c8006" width=224> |
58
+
59
+
60
+ #### Long Video Results using Condition Images
61
+ Our Image-to-Video model can generate a video by providing a sequence of user-input condition images.
62
+ | Video1 | Video2 | Video3 |
63
+ | --- | --- | --- |
64
+ | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/af6f5c50-c773-4ef2-a757-6d7a46393f39" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/d58e4037-d8df-4f90-8c81-ce4b6d2d868e" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/40da15ba-f5c1-48d8-84d6-8d327207d696" width=224> |
65
+
66
+ | Video4 | Video5 | Video6 |
67
+ | --- | --- | --- |
68
+ | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/8f04c9fc-3031-49e3-9de8-83d582b80a1f" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/604107fb-8afe-4052-bda4-362c646a756e" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/b05fa6a0-12e6-4111-abf8-18b8cd84f3ff" width=224> |
69
+
70
+
71
+
72
+
73
+ #### Short Videos
74
+
75
+ | Video1 | Video2 | Video3 |
76
+ | --- | --- | --- |
77
+ | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/5e7f717f-daad-46f6-b3ba-c087bd843158" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/79aa52b2-bf37-4c9c-8555-c7050aec0cdf" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/9fdfd091-10e6-434e-9ce7-6d6e6d8f4b22" width=224> |
78
+
79
+
80
+
81
+ | Video4 | Video5 | Video6 |
82
+ | --- | --- | --- |
83
+ | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/0b219b60-a998-4820-9657-6abe1747cb6b" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/d387aef0-ffc8-41b0-914f-4b0392d9f8c5" width=224> | <img src="https://github.com/HVision-NKU/StoryDiffusion/assets/49511209/3c64958a-1079-4ca0-a9cf-e0486adbc57f" width=224> |
84
+
85
+
86
+
87
+
88
+ ## 🚩 **TODO/Updates**
89
+ - [x] Comic Results of StoryDiffusion.
90
+ - [x] Video Results of StoryDiffusion.
91
+ - [x] Source code of Comic Generation
92
+ - [x] Source code of gradio demo
93
+ - [ ] Source code of Video Generation Model
94
+ - [ ] Pretrained weight of Video Generation Model
95
+ ---
96
+
97
+ # 🔧 Dependencies and Installation
98
+
99
+ - Python >= 3.8 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
100
+ - [PyTorch >= 2.0.0](https://pytorch.org/)
101
+ ```bash
102
+ conda create --name storydiffusion python=3.10
103
+ conda activate storydiffusion
104
+ pip install -U pip
105
+
106
+ # Install requirements
107
+ pip install -r requirements.txt
108
+ ```
109
+ # How to use
110
+
111
+ Currently, we provide two ways for you to generate comics.
112
+
113
+ ## Use the jupyter notebook
114
+
115
+ You can open the `Comic_Generation.ipynb` and run the code.
116
+
117
+ ## Start a local gradio demo
118
+ Run the following command:
119
+
120
+
121
+ **(Recommend)** We provide a low GPU Memory cost version, it was tested on a machine with 24GB GPU-memory(Tesla A10) and 30GB RAM, and expected to work well with >20 G GPU-memory.
122
+
123
+ ```python
124
+ python gradio_app_sdxl_specific_id_low_vram.py
125
+ ```
126
+
127
+
128
+ ## Contact
129
+ If you have any questions, you are very welcome to email [email protected] and [email protected]
130
+
131
+
132
+
133
+
134
+ # Disclaimer
135
+ This project strives to impact the domain of AI-driven image and video generation positively. Users are granted the freedom to create images and videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.
136
+
137
+ # Related Resources
138
+ Following are some third-party implementations of StoryDiffusion.
139
+
140
+
141
+ ## API
142
+
143
+ - [runpod.io serverless worker](https://github.com/bes-dev/story-diffusion-runpod-serverless-worker) provided by [BeS](https://github.com/bes-dev).
144
+ - [Replicate worker](https://github.com/camenduru/StoryDiffusion-replicate) provided by [camenduru](https://github.com/camenduru).
145
+
146
+
147
+
148
+
149
+ # BibTeX
150
+ If you find StoryDiffusion useful for your research and applications, please cite using this BibTeX:
151
+
152
+ ```BibTeX
153
+ @article{zhou2024storydiffusion,
154
+ title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
155
+ author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
156
+ journal={arXiv preprint arXiv:2405.01434},
157
+ year={2024}
158
+ }
app.py ADDED
@@ -0,0 +1,750 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from email.policy import default
2
+ import gradio as gr
3
+ import numpy as np
4
+ import spaces
5
+ import torch
6
+ import requests
7
+ import random
8
+ import os
9
+ import sys
10
+ import pickle
11
+ from PIL import Image
12
+ from tqdm.auto import tqdm
13
+ from datetime import datetime
14
+ from utils.gradio_utils import is_torch2_available
15
+ if is_torch2_available():
16
+ from utils.gradio_utils import \
17
+ AttnProcessor2_0 as AttnProcessor
18
+ # from utils.gradio_utils import SpatialAttnProcessor2_0
19
+ else:
20
+ from utils.gradio_utils import AttnProcessor
21
+
22
+ import diffusers
23
+ from diffusers import StableDiffusionXLPipeline
24
+ from utils import PhotoMakerStableDiffusionXLPipeline
25
+ from diffusers import DDIMScheduler
26
+ import torch.nn.functional as F
27
+ from utils.gradio_utils import cal_attn_mask_xl
28
+ import copy
29
+ import os
30
+ from huggingface_hub import hf_hub_download
31
+ from diffusers.utils import load_image
32
+ from utils.utils import get_comic
33
+ from utils.style_template import styles
34
+ image_encoder_path = "./data/models/ip_adapter/sdxl_models/image_encoder"
35
+ ip_ckpt = "./data/models/ip_adapter/sdxl_models/ip-adapter_sdxl_vit-h.bin"
36
+ os.environ["no_proxy"] = "localhost,127.0.0.1,::1"
37
+ STYLE_NAMES = list(styles.keys())
38
+ DEFAULT_STYLE_NAME = "Japanese Anime"
39
+ global models_dict
40
+ use_va = True
41
+ models_dict = {
42
+ # "Juggernaut": "RunDiffusion/Juggernaut-XL-v8",
43
+ # "RealVision": "SG161222/RealVisXL_V4.0" ,
44
+ # "SDXL":"stabilityai/stable-diffusion-xl-base-1.0" ,
45
+ "Unstable": "stablediffusionapi/sdxl-unstable-diffusers-y"
46
+ }
47
+ photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")
48
+ MAX_SEED = np.iinfo(np.int32).max
49
+ def setup_seed(seed):
50
+ torch.manual_seed(seed)
51
+ torch.cuda.manual_seed_all(seed)
52
+ np.random.seed(seed)
53
+ random.seed(seed)
54
+ torch.backends.cudnn.deterministic = True
55
+ def set_text_unfinished():
56
+ return gr.update(visible=True, value="<h3>(Not Finished) Generating ··· The intermediate results will be shown.</h3>")
57
+ def set_text_finished():
58
+ return gr.update(visible=True, value="<h3>Generation Finished</h3>")
59
+ #################################################
60
+ def get_image_path_list(folder_name):
61
+ image_basename_list = os.listdir(folder_name)
62
+ image_path_list = sorted([os.path.join(folder_name, basename) for basename in image_basename_list])
63
+ return image_path_list
64
+
65
+ #################################################
66
+ class SpatialAttnProcessor2_0(torch.nn.Module):
67
+ r"""
68
+ Attention processor for IP-Adapater for PyTorch 2.0.
69
+ Args:
70
+ hidden_size (`int`):
71
+ The hidden size of the attention layer.
72
+ cross_attention_dim (`int`):
73
+ The number of channels in the `encoder_hidden_states`.
74
+ text_context_len (`int`, defaults to 77):
75
+ The context length of the text features.
76
+ scale (`float`, defaults to 1.0):
77
+ the weight scale of image prompt.
78
+ """
79
+
80
+ def __init__(self, hidden_size = None, cross_attention_dim=None,id_length = 4,device = "cuda",dtype = torch.float16):
81
+ super().__init__()
82
+ if not hasattr(F, "scaled_dot_product_attention"):
83
+ raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.")
84
+ self.device = device
85
+ self.dtype = dtype
86
+ self.hidden_size = hidden_size
87
+ self.cross_attention_dim = cross_attention_dim
88
+ self.total_length = id_length + 1
89
+ self.id_length = id_length
90
+ self.id_bank = {}
91
+
92
+ def __call__(
93
+ self,
94
+ attn,
95
+ hidden_states,
96
+ encoder_hidden_states=None,
97
+ attention_mask=None,
98
+ temb=None):
99
+ # un_cond_hidden_states, cond_hidden_states = hidden_states.chunk(2)
100
+ # un_cond_hidden_states = self.__call2__(attn, un_cond_hidden_states,encoder_hidden_states,attention_mask,temb)
101
+ # 生成一个0到1之间的随机数
102
+ global total_count,attn_count,cur_step,mask1024,mask4096
103
+ global sa32, sa64
104
+ global write
105
+ global height,width
106
+ if write:
107
+ # print(f"white:{cur_step}")
108
+ self.id_bank[cur_step] = [hidden_states[:self.id_length], hidden_states[self.id_length:]]
109
+ else:
110
+ encoder_hidden_states = torch.cat((self.id_bank[cur_step][0].to(self.device),hidden_states[:1],self.id_bank[cur_step][1].to(self.device),hidden_states[1:]))
111
+ # 判断随机数是否大于0.5
112
+ if cur_step <5:
113
+ hidden_states = self.__call2__(attn, hidden_states,encoder_hidden_states,attention_mask,temb)
114
+ else: # 256 1024 4096
115
+ random_number = random.random()
116
+ if cur_step <20:
117
+ rand_num = 0.3
118
+ else:
119
+ rand_num = 0.1
120
+ # print(f"hidden state shape {hidden_states.shape[1]}")
121
+ if random_number > rand_num:
122
+ # print("mask shape",mask1024.shape,mask4096.shape)
123
+ if not write:
124
+ if hidden_states.shape[1] == (height//32) * (width//32):
125
+ attention_mask = mask1024[mask1024.shape[0] // self.total_length * self.id_length:]
126
+ else:
127
+ attention_mask = mask4096[mask4096.shape[0] // self.total_length * self.id_length:]
128
+ else:
129
+ # print(self.total_length,self.id_length,hidden_states.shape,(height//32) * (width//32))
130
+ if hidden_states.shape[1] == (height//32) * (width//32):
131
+ attention_mask = mask1024[:mask1024.shape[0] // self.total_length * self.id_length,:mask1024.shape[0] // self.total_length * self.id_length]
132
+ else:
133
+ attention_mask = mask4096[:mask4096.shape[0] // self.total_length * self.id_length,:mask4096.shape[0] // self.total_length * self.id_length]
134
+ # print(attention_mask.shape)
135
+ # print("before attention",hidden_states.shape,attention_mask.shape,encoder_hidden_states.shape if encoder_hidden_states is not None else "None")
136
+ hidden_states = self.__call1__(attn, hidden_states,encoder_hidden_states,attention_mask,temb)
137
+ else:
138
+ hidden_states = self.__call2__(attn, hidden_states,None,attention_mask,temb)
139
+ attn_count +=1
140
+ if attn_count == total_count:
141
+ attn_count = 0
142
+ cur_step += 1
143
+ mask1024,mask4096 = cal_attn_mask_xl(self.total_length,self.id_length,sa32,sa64,height,width, device=self.device, dtype= self.dtype)
144
+
145
+ return hidden_states
146
+ def __call1__(
147
+ self,
148
+ attn,
149
+ hidden_states,
150
+ encoder_hidden_states=None,
151
+ attention_mask=None,
152
+ temb=None,
153
+ ):
154
+ # print("hidden state shape",hidden_states.shape,self.id_length)
155
+ residual = hidden_states
156
+ # if encoder_hidden_states is not None:
157
+ # raise Exception("not implement")
158
+ if attn.spatial_norm is not None:
159
+ hidden_states = attn.spatial_norm(hidden_states, temb)
160
+ input_ndim = hidden_states.ndim
161
+
162
+ if input_ndim == 4:
163
+ total_batch_size, channel, height, width = hidden_states.shape
164
+ hidden_states = hidden_states.view(total_batch_size, channel, height * width).transpose(1, 2)
165
+ total_batch_size,nums_token,channel = hidden_states.shape
166
+ img_nums = total_batch_size//2
167
+ hidden_states = hidden_states.view(-1,img_nums,nums_token,channel).reshape(-1,img_nums * nums_token,channel)
168
+
169
+ batch_size, sequence_length, _ = hidden_states.shape
170
+
171
+ if attn.group_norm is not None:
172
+ hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
173
+
174
+ query = attn.to_q(hidden_states)
175
+
176
+ if encoder_hidden_states is None:
177
+ encoder_hidden_states = hidden_states # B, N, C
178
+ else:
179
+ encoder_hidden_states = encoder_hidden_states.view(-1,self.id_length+1,nums_token,channel).reshape(-1,(self.id_length+1) * nums_token,channel)
180
+
181
+ key = attn.to_k(encoder_hidden_states)
182
+ value = attn.to_v(encoder_hidden_states)
183
+
184
+
185
+ inner_dim = key.shape[-1]
186
+ head_dim = inner_dim // attn.heads
187
+
188
+ query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
189
+
190
+ key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
191
+ value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
192
+ # print(key.shape,value.shape,query.shape,attention_mask.shape)
193
+ # the output of sdp = (batch, num_heads, seq_len, head_dim)
194
+ # TODO: add support for attn.scale when we move to Torch 2.1
195
+ #print(query.shape,key.shape,value.shape,attention_mask.shape)
196
+ hidden_states = F.scaled_dot_product_attention(
197
+ query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
198
+ )
199
+
200
+ hidden_states = hidden_states.transpose(1, 2).reshape(total_batch_size, -1, attn.heads * head_dim)
201
+ hidden_states = hidden_states.to(query.dtype)
202
+
203
+
204
+
205
+ # linear proj
206
+ hidden_states = attn.to_out[0](hidden_states)
207
+ # dropout
208
+ hidden_states = attn.to_out[1](hidden_states)
209
+
210
+ # if input_ndim == 4:
211
+ # tile_hidden_states = tile_hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
212
+
213
+ # if attn.residual_connection:
214
+ # tile_hidden_states = tile_hidden_states + residual
215
+
216
+ if input_ndim == 4:
217
+ hidden_states = hidden_states.transpose(-1, -2).reshape(total_batch_size, channel, height, width)
218
+ if attn.residual_connection:
219
+ hidden_states = hidden_states + residual
220
+ hidden_states = hidden_states / attn.rescale_output_factor
221
+ # print(hidden_states.shape)
222
+ return hidden_states
223
+ def __call2__(
224
+ self,
225
+ attn,
226
+ hidden_states,
227
+ encoder_hidden_states=None,
228
+ attention_mask=None,
229
+ temb=None):
230
+ residual = hidden_states
231
+
232
+ if attn.spatial_norm is not None:
233
+ hidden_states = attn.spatial_norm(hidden_states, temb)
234
+
235
+ input_ndim = hidden_states.ndim
236
+
237
+ if input_ndim == 4:
238
+ batch_size, channel, height, width = hidden_states.shape
239
+ hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
240
+
241
+ batch_size, sequence_length, channel = (
242
+ hidden_states.shape
243
+ )
244
+ # print(hidden_states.shape)
245
+ if attention_mask is not None:
246
+ attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
247
+ # scaled_dot_product_attention expects attention_mask shape to be
248
+ # (batch, heads, source_length, target_length)
249
+ attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
250
+
251
+ if attn.group_norm is not None:
252
+ hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
253
+
254
+ query = attn.to_q(hidden_states)
255
+
256
+ if encoder_hidden_states is None:
257
+ encoder_hidden_states = hidden_states # B, N, C
258
+ else:
259
+ encoder_hidden_states = encoder_hidden_states.view(-1,self.id_length+1,sequence_length,channel).reshape(-1,(self.id_length+1) * sequence_length,channel)
260
+
261
+ key = attn.to_k(encoder_hidden_states)
262
+ value = attn.to_v(encoder_hidden_states)
263
+
264
+ inner_dim = key.shape[-1]
265
+ head_dim = inner_dim // attn.heads
266
+
267
+ query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
268
+
269
+ key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
270
+ value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
271
+
272
+ # the output of sdp = (batch, num_heads, seq_len, head_dim)
273
+ # TODO: add support for attn.scale when we move to Torch 2.1
274
+ hidden_states = F.scaled_dot_product_attention(
275
+ query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
276
+ )
277
+
278
+ hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
279
+ hidden_states = hidden_states.to(query.dtype)
280
+
281
+ # linear proj
282
+ hidden_states = attn.to_out[0](hidden_states)
283
+ # dropout
284
+ hidden_states = attn.to_out[1](hidden_states)
285
+
286
+ if input_ndim == 4:
287
+ hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
288
+
289
+ if attn.residual_connection:
290
+ hidden_states = hidden_states + residual
291
+
292
+ hidden_states = hidden_states / attn.rescale_output_factor
293
+
294
+ return hidden_states
295
+
296
+ def set_attention_processor(unet,id_length,is_ipadapter = False):
297
+ global total_count
298
+ total_count = 0
299
+ attn_procs = {}
300
+ for name in unet.attn_processors.keys():
301
+ cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
302
+ if name.startswith("mid_block"):
303
+ hidden_size = unet.config.block_out_channels[-1]
304
+ elif name.startswith("up_blocks"):
305
+ block_id = int(name[len("up_blocks.")])
306
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
307
+ elif name.startswith("down_blocks"):
308
+ block_id = int(name[len("down_blocks.")])
309
+ hidden_size = unet.config.block_out_channels[block_id]
310
+ if cross_attention_dim is None:
311
+ if name.startswith("up_blocks") :
312
+ attn_procs[name] = SpatialAttnProcessor2_0(id_length = id_length)
313
+ total_count +=1
314
+ else:
315
+ attn_procs[name] = AttnProcessor()
316
+ else:
317
+ if is_ipadapter:
318
+ attn_procs[name] = IPAttnProcessor2_0(
319
+ hidden_size=hidden_size,
320
+ cross_attention_dim=cross_attention_dim,
321
+ scale=1,
322
+ num_tokens=4,
323
+ ).to(unet.device, dtype=torch.float16)
324
+ else:
325
+ attn_procs[name] = AttnProcessor()
326
+
327
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
328
+ print("successsfully load paired self-attention")
329
+ print(f"number of the processor : {total_count}")
330
+ #################################################
331
+ #################################################
332
+ canvas_html = "<div id='canvas-root' style='max-width:400px; margin: 0 auto'></div>"
333
+ load_js = """
334
+ async () => {
335
+ const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/sketch-canvas.js"
336
+ fetch(url)
337
+ .then(res => res.text())
338
+ .then(text => {
339
+ const script = document.createElement('script');
340
+ script.type = "module"
341
+ script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
342
+ document.head.appendChild(script);
343
+ });
344
+ }
345
+ """
346
+
347
+ get_js_colors = """
348
+ async (canvasData) => {
349
+ const canvasEl = document.getElementById("canvas-root");
350
+ return [canvasEl._data]
351
+ }
352
+ """
353
+
354
+ css = '''
355
+ #color-bg{display:flex;justify-content: center;align-items: center;}
356
+ .color-bg-item{width: 100%; height: 32px}
357
+ #main_button{width:100%}
358
+ <style>
359
+ '''
360
+
361
+
362
+ #################################################
363
+ title = r"""
364
+ <h1 align="center">StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</h1>
365
+ """
366
+
367
+ description = r"""
368
+ <b>Official 🤗 Gradio demo</b> for <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'><b>StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</b></a>.<br>
369
+ ❗️❗️❗️[<b>Important</b>] Personalization steps:<br>
370
+ 1️⃣ Enter a Textual Description for Character, if you add the Ref-Image, making sure to <b>follow the class word</b> you want to customize with the <b>trigger word</b>: `img`, such as: `man img` or `woman img` or `girl img`.<br>
371
+ 2️⃣ Enter the prompt array, each line corrsponds to one generated image.<br>
372
+ 3️⃣ Choose your preferred style template.<br>
373
+ 4️⃣ Click the <b>Submit</b> button to start customizing.
374
+ """
375
+
376
+ article = r"""
377
+
378
+ If StoryDiffusion is helpful, please help to ⭐ the <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'>Github Repo</a>. Thanks!
379
+ [![GitHub Stars](https://img.shields.io/github/stars/HVision-NKU/StoryDiffusion?style=social)](https://github.com/HVision-NKU/StoryDiffusion)
380
+ ---
381
+ 📝 **Citation**
382
+ <br>
383
+ If our work is useful for your research, please consider citing:
384
+
385
+ ```bibtex
386
+ @article{Zhou2024storydiffusion,
387
+ title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
388
+ author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
389
+ year={2024}
390
+ }
391
+ ```
392
+ 📋 **License**
393
+ <br>
394
+ The Contents you create are under Apache-2.0 LICENSE. The Code are under Attribution-NonCommercial 4.0 International.
395
+
396
+ 📧 **Contact**
397
+ <br>
398
+ If you have any questions, please feel free to reach me out at <b>[email protected]</b>.
399
+ """
400
+ version = r"""
401
+ <h3 align="center">StoryDiffusion Version 0.01 (test version)</h3>
402
+
403
+ <h5 >1. Support image ref image. (Cartoon Ref image is not support now)</h5>
404
+ <h5 >2. Support Typesetting Style and Captioning.(By default, the prompt is used as the caption for each image. If you need to change the caption, add a # at the end of each line. Only the part after the # will be added as a caption to the image.)</h5>
405
+ <h5 >3. [NC]symbol (The [NC] symbol is used as a flag to indicate that no characters should be present in the generated scene images. If you want do that, prepend the "[NC]" at the beginning of the line. For example, to generate a scene of falling leaves without any character, write: "[NC] The leaves are falling."),Currently, support is only using Textual Description</h5>
406
+ <h5 align="center">Tips: Not Ready Now! Just Test</h5>
407
+ """
408
+ #################################################
409
+ global attn_count, total_count, id_length, total_length,cur_step, cur_model_type
410
+ global write
411
+ global sa32, sa64
412
+ global height,width
413
+ attn_count = 0
414
+ total_count = 0
415
+ cur_step = 0
416
+ id_length = 4
417
+ total_length = 5
418
+ cur_model_type = ""
419
+ device="cuda"
420
+ global attn_procs,unet
421
+ attn_procs = {}
422
+ ###
423
+ write = False
424
+ ###
425
+ sa32 = 0.5
426
+ sa64 = 0.5
427
+ height = 768
428
+ width = 768
429
+ ###
430
+ global sd_model_path
431
+ sd_model_path = models_dict["Unstable"]#"SG161222/RealVisXL_V4.0"
432
+ use_safetensors= False
433
+ ### LOAD Stable Diffusion Pipeline
434
+ pipe1 = StableDiffusionXLPipeline.from_pretrained(sd_model_path, torch_dtype=torch.float16, use_safetensors= use_safetensors)
435
+ pipe1 = pipe1.to("cuda")
436
+ pipe1.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
437
+ # pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
438
+ pipe1.scheduler.set_timesteps(50)
439
+ ###
440
+ pipe2 = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
441
+ sd_model_path, torch_dtype=torch.float16, use_safetensors=use_safetensors)
442
+ pipe2 = pipe2.to("cuda")
443
+ pipe2.load_photomaker_adapter(
444
+ os.path.dirname(photomaker_path),
445
+ subfolder="",
446
+ weight_name=os.path.basename(photomaker_path),
447
+ trigger_word="img" # define the trigger word
448
+ )
449
+ pipe2 = pipe2.to("cuda")
450
+ pipe2.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
451
+ pipe2.fuse_lora()
452
+
453
+ ######### Gradio Fuction #############
454
+
455
+ def swap_to_gallery(images):
456
+ return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
457
+
458
+ def upload_example_to_gallery(images, prompt, style, negative_prompt):
459
+ return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
460
+
461
+ def remove_back_to_files():
462
+ return gr.update(visible=False), gr.update(visible=False), gr.update(visible=True)
463
+
464
+ def remove_tips():
465
+ return gr.update(visible=False)
466
+
467
+ def apply_style_positive(style_name: str, positive: str):
468
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
469
+ return p.replace("{prompt}", positive)
470
+
471
+ def apply_style(style_name: str, positives: list, negative: str = ""):
472
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
473
+ return [p.replace("{prompt}", positive) for positive in positives], n + ' ' + negative
474
+
475
+ def change_visiale_by_model_type(_model_type):
476
+ if _model_type == "Only Using Textual Description":
477
+ return gr.update(visible=False), gr.update(visible=False), gr.update(visible=False)
478
+ elif _model_type == "Using Ref Images":
479
+ return gr.update(visible=True), gr.update(visible=True), gr.update(visible=False)
480
+ else:
481
+ raise ValueError("Invalid model type",_model_type)
482
+
483
+
484
+ ######### Image Generation ##############
485
+ @spaces.GPU
486
+ def process_generation(_sd_type,_model_type,_upload_images, _num_steps,style_name, _Ip_Adapter_Strength ,_style_strength_ratio, guidance_scale, seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt,prompt_array,G_height,G_width,_comic_type):
487
+ _model_type = "Photomaker" if _model_type == "Using Ref Images" else "original"
488
+ if _model_type == "Photomaker" and "img" not in general_prompt:
489
+ raise gr.Error("Please add the triger word \" img \" behind the class word you want to customize, such as: man img or woman img")
490
+ if _upload_images is None and _model_type != "original":
491
+ raise gr.Error(f"Cannot find any input face image!")
492
+ global sa32, sa64,id_length,total_length,attn_procs,unet,cur_model_type,device
493
+ global write
494
+ global cur_step,attn_count
495
+ global height,width
496
+ height = G_height
497
+ width = G_width
498
+ global pipe1,pipe2
499
+ global sd_model_path,models_dict
500
+ sd_model_path = models_dict[_sd_type]
501
+ use_safe_tensor = True
502
+ if _model_type == "original":
503
+ pipe = pipe1
504
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
505
+ elif _model_type == "Photomaker":
506
+ pipe = pipe2
507
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
508
+ else:
509
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
510
+ ##### ########################
511
+ pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
512
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
513
+ cur_model_type = _sd_type+"-"+_model_type+""+str(id_length_)
514
+ if _model_type != "original":
515
+ input_id_images = []
516
+ for img in _upload_images:
517
+ print(img)
518
+ input_id_images.append(load_image(img))
519
+ prompts = prompt_array.splitlines()
520
+ start_merge_step = int(float(_style_strength_ratio) / 100 * _num_steps)
521
+ if start_merge_step > 30:
522
+ start_merge_step = 30
523
+ print(f"start_merge_step:{start_merge_step}")
524
+ generator = torch.Generator(device="cuda").manual_seed(seed_)
525
+ sa32, sa64 = sa32_, sa64_
526
+ id_length = id_length_
527
+ clipped_prompts = prompts[:]
528
+ prompts = [general_prompt + "," + prompt if "[NC]" not in prompt else prompt.replace("[NC]","") for prompt in clipped_prompts]
529
+ prompts = [prompt.rpartition('#')[0] if "#" in prompt else prompt for prompt in prompts]
530
+ print(prompts)
531
+ id_prompts = prompts[:id_length]
532
+ real_prompts = prompts[id_length:]
533
+ torch.cuda.empty_cache()
534
+ write = True
535
+ cur_step = 0
536
+
537
+ attn_count = 0
538
+ id_prompts, negative_prompt = apply_style(style_name, id_prompts, negative_prompt)
539
+ setup_seed(seed_)
540
+ total_results = []
541
+ if _model_type == "original":
542
+ id_images = pipe(id_prompts, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
543
+ elif _model_type == "Photomaker":
544
+ id_images = pipe(id_prompts,input_id_images=input_id_images, num_inference_steps=_num_steps, guidance_scale=guidance_scale, start_merge_step = start_merge_step, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
545
+ else:
546
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
547
+ total_results = id_images + total_results
548
+ yield total_results
549
+ real_images = []
550
+ write = False
551
+ for real_prompt in real_prompts:
552
+ setup_seed(seed_)
553
+ cur_step = 0
554
+ real_prompt = apply_style_positive(style_name, real_prompt)
555
+ if _model_type == "original":
556
+ real_images.append(pipe(real_prompt, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images[0])
557
+ elif _model_type == "Photomaker":
558
+ real_images.append(pipe(real_prompt, input_id_images=input_id_images, num_inference_steps=_num_steps, guidance_scale=guidance_scale, start_merge_step = start_merge_step, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images[0])
559
+ else:
560
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
561
+ total_results = [real_images[-1]] + total_results
562
+ yield total_results
563
+ if _comic_type != "No typesetting (default)":
564
+ captions= prompt_array.splitlines()
565
+ captions = [caption.replace("[NC]","") for caption in captions]
566
+ captions = [caption.split('#')[-1] if "#" in caption else caption for caption in captions]
567
+ from PIL import ImageFont
568
+ total_results = get_comic(id_images + real_images, _comic_type,captions= captions,font=ImageFont.truetype("./fonts/Inkfree.ttf", int(45))) + total_results
569
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
570
+ yield total_results
571
+
572
+
573
+
574
+ def array2string(arr):
575
+ stringtmp = ""
576
+ for i,part in enumerate(arr):
577
+ if i != len(arr)-1:
578
+ stringtmp += part +"\n"
579
+ else:
580
+ stringtmp += part
581
+
582
+ return stringtmp
583
+
584
+
585
+ #################################################
586
+ #################################################
587
+ ### define the interface
588
+ with gr.Blocks(css=css) as demo:
589
+ binary_matrixes = gr.State([])
590
+ color_layout = gr.State([])
591
+
592
+ # gr.Markdown(logo)
593
+ gr.Markdown(title)
594
+ gr.Markdown(description)
595
+
596
+ with gr.Row():
597
+ with gr.Group(elem_id="main-image"):
598
+ # button_run = gr.Button("generate id images ! 😺", elem_id="main_button", interactive=True)
599
+
600
+ prompts = []
601
+ colors = []
602
+ # with gr.Column(visible=False) as post_sketch:
603
+ # for n in range(MAX_COLORS):
604
+ # if n == 0 :
605
+ # with gr.Row(visible=False) as color_row[n]:
606
+ # colors.append(gr.Image(shape=(100, 100), label="background", type="pil", image_mode="RGB", width=100, height=100))
607
+ # prompts.append(gr.Textbox(label="Prompt for the background (white region)", value=""))
608
+ # else:
609
+ # with gr.Row(visible=False) as color_row[n]:
610
+ # colors.append(gr.Image(shape=(100, 100), label="segment "+str(n), type="pil", image_mode="RGB", width=100, height=100))
611
+ # prompts.append(gr.Textbox(label="Prompt for the segment "+str(n)))
612
+
613
+ # get_genprompt_run = gr.Button("(2) I've finished segment labeling ! 😺", elem_id="prompt_button", interactive=True)
614
+
615
+ with gr.Column(visible=True) as gen_prompt_vis:
616
+ sd_type = gr.Dropdown(choices=list(models_dict.keys()), value = "Unstable",label="sd_type", info="Select pretrained model")
617
+ model_type = gr.Radio(["Only Using Textual Description", "Using Ref Images"], label="model_type", value = "Only Using Textual Description", info="Control type of the Character")
618
+ with gr.Group(visible=False) as control_image_input:
619
+ files = gr.Files(
620
+ label="Drag (Select) 1 or more photos of your face",
621
+ file_types=["image"],
622
+ )
623
+ uploaded_files = gr.Gallery(label="Your images", visible=False, columns=5, rows=1, height=200)
624
+ with gr.Column(visible=False) as clear_button:
625
+ remove_and_reupload = gr.ClearButton(value="Remove and upload new ones", components=files, size="sm")
626
+ general_prompt = gr.Textbox(value='', label="(1) Textual Description for Character", interactive=True)
627
+ negative_prompt = gr.Textbox(value='', label="(2) Negative_prompt", interactive=True)
628
+ style = gr.Dropdown(label="Style template", choices=STYLE_NAMES, value=DEFAULT_STYLE_NAME)
629
+ prompt_array = gr.Textbox(lines = 3,value='', label="(3) Comic Description (each line corresponds to a frame).", interactive=True)
630
+ with gr.Accordion("(4) Tune the hyperparameters", open=True):
631
+ #sa16_ = gr.Slider(label=" (The degree of Paired Attention at 16 x 16 self-attention layers) ", minimum=0, maximum=1., value=0.3, step=0.1)
632
+ sa32_ = gr.Slider(label=" (The degree of Paired Attention at 32 x 32 self-attention layers) ", minimum=0, maximum=1., value=0.7, step=0.1)
633
+ sa64_ = gr.Slider(label=" (The degree of Paired Attention at 64 x 64 self-attention layers) ", minimum=0, maximum=1., value=0.7, step=0.1)
634
+ id_length_ = gr.Slider(label= "Number of id images in total images" , minimum=2, maximum=4, value=2, step=1)
635
+ # total_length_ = gr.Slider(label= "Number of total images", minimum=1, maximum=20, value=1, step=1)
636
+ seed_ = gr.Slider(label="Seed", minimum=-1, maximum=MAX_SEED, value=0, step=1)
637
+ num_steps = gr.Slider(
638
+ label="Number of sample steps",
639
+ minimum=20,
640
+ maximum=100,
641
+ step=1,
642
+ value=50,
643
+ )
644
+ G_height = gr.Slider(
645
+ label="height",
646
+ minimum=256,
647
+ maximum=1024,
648
+ step=32,
649
+ value=768,
650
+ )
651
+ G_width = gr.Slider(
652
+ label="width",
653
+ minimum=256,
654
+ maximum=1024,
655
+ step=32,
656
+ value=768,
657
+ )
658
+ comic_type = gr.Radio(["No typesetting (default)", "Four Pannel", "Classic Comic Style"], value = "Classic Comic Style", label="Typesetting Style", info="Select the typesetting style ")
659
+ guidance_scale = gr.Slider(
660
+ label="Guidance scale",
661
+ minimum=0.1,
662
+ maximum=10.0,
663
+ step=0.1,
664
+ value=5,
665
+ )
666
+ style_strength_ratio = gr.Slider(
667
+ label="Style strength of Ref Image (%)",
668
+ minimum=15,
669
+ maximum=50,
670
+ step=1,
671
+ value=20,
672
+ visible=False
673
+ )
674
+ Ip_Adapter_Strength = gr.Slider(
675
+ label="Ip_Adapter_Strength",
676
+ minimum=0,
677
+ maximum=1,
678
+ step=0.1,
679
+ value=0.5,
680
+ visible=False
681
+ )
682
+ final_run_btn = gr.Button("Generate ! 😺")
683
+
684
+
685
+ with gr.Column():
686
+ out_image = gr.Gallery(label="Result", columns=2, height='auto')
687
+ generated_information = gr.Markdown(label="Generation Details", value="",visible=False)
688
+ gr.Markdown(version)
689
+ model_type.change(fn = change_visiale_by_model_type , inputs = model_type, outputs=[control_image_input,style_strength_ratio,Ip_Adapter_Strength])
690
+ files.upload(fn=swap_to_gallery, inputs=files, outputs=[uploaded_files, clear_button, files])
691
+ remove_and_reupload.click(fn=remove_back_to_files, outputs=[uploaded_files, clear_button, files])
692
+
693
+ final_run_btn.click(fn=set_text_unfinished, outputs = generated_information
694
+ ).then(process_generation, inputs=[sd_type,model_type,files, num_steps,style, Ip_Adapter_Strength,style_strength_ratio, guidance_scale, seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt, prompt_array,G_height,G_width,comic_type], outputs=out_image
695
+ ).then(fn=set_text_finished,outputs = generated_information)
696
+
697
+
698
+ gr.Examples(
699
+ examples=[
700
+ [1,0.5,0.5,3,"a woman img, wearing a white T-shirt, blue loose hair",
701
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
702
+ array2string(["wake up in the bed",
703
+ "have breakfast",
704
+ "is on the road, go to company",
705
+ "work in the company",
706
+ "Take a walk next to the company at noon",
707
+ "lying in bed at night"]),
708
+ "Japanese Anime", "Using Ref Images",get_image_path_list('./examples/taylor'),768,768
709
+ ],
710
+ [0,0.5,0.5,2,"a man, wearing black jacket",
711
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
712
+ array2string(["wake up in the bed",
713
+ "have breakfast",
714
+ "is on the road, go to the company, close look",
715
+ "work in the company",
716
+ "laughing happily",
717
+ "lying in bed at night"
718
+ ]),
719
+ "Japanese Anime","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
720
+ ],
721
+ [0,0.3,0.5,2,"a girl, wearing white shirt, black skirt, black tie, yellow hair",
722
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
723
+ array2string([
724
+ "at home #at home, began to go to drawing",
725
+ "sitting alone on a park bench.",
726
+ "reading a book on a park bench.",
727
+ "[NC]A squirrel approaches, peeking over the bench. ",
728
+ "look around in the park. # She looks around and enjoys the beauty of nature.",
729
+ "[NC]leaf falls from the tree, landing on the sketchbook.",
730
+ "picks up the leaf, examining its details closely.",
731
+ "starts sketching the leaf with intricate lines.",
732
+ "holds up the sketch drawing of the leaf.",
733
+ "[NC]The brown squirrel appear.",
734
+ "is very happy # She is very happy to see the squirrel again",
735
+ "[NC]The brown squirrel takes the cracker and scampers up a tree. # She gives the squirrel cracker",
736
+ "laughs and tucks the leaf into her book as a keepsake.",
737
+ "ready to leave.",]),
738
+ "Japanese Anime","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
739
+ ]
740
+ ],
741
+ inputs=[seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt, prompt_array,style,model_type,files,G_height,G_width],
742
+ # outputs=[post_sketch, binary_matrixes, *color_row, *colors, *prompts, gen_prompt_vis, general_prompt, seed_],
743
+ # run_on_click=True,
744
+ label='😺 Examples 😺',
745
+ )
746
+ gr.Markdown(article)
747
+
748
+ # demo.load(None, None, None, _js=load_js)
749
+
750
+ demo.launch(server_name="0.0.0.0", share = True if use_va else False)
cog.yaml ADDED
@@ -0,0 +1,23 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Configuration for Cog ⚙️
2
+ # Reference: https://cog.run/yaml
3
+
4
+ build:
5
+ gpu: true
6
+ system_packages:
7
+ - "libgl1-mesa-glx"
8
+ - "libglib2.0-0"
9
+ python_version: "3.11"
10
+ python_packages:
11
+ - xformers==0.0.20
12
+ - torch==2.0.1
13
+ - torchvision==0.15.2
14
+ - diffusers==0.25.0
15
+ - transformers==4.36.2
16
+ - gradio==3.48.0
17
+ - accelerate
18
+ - safetensors
19
+ - peft
20
+ - Pillow==9.5.0
21
+ run:
22
+ - curl -o /usr/local/bin/pget -L "https://github.com/replicate/pget/releases/download/v0.6.0/pget_linux_x86_64" && chmod +x /usr/local/bin/pget
23
+ predict: "predict.py:Predictor"
config/models.yaml ADDED
@@ -0,0 +1,26 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Juggernaut:
2
+ path: "https://huggingface.co/RunDiffusion/Juggernaut-XL-v9/blob/main/Juggernaut-XL_v9_RunDiffusionPhoto_v2.safetensors"
3
+ single_files: true ### if true, is a civitai model
4
+ use_safetensors: true
5
+
6
+ Dreamshaper:
7
+ path: "https://huggingface.co/Lykon/DreamShaper/blob/main/DreamShaperXL_Turbo_SFWdpmppSde_half_pruned.safetensors"
8
+ single_files: true ### if true, is a civitai model
9
+ use_safetensors: true
10
+
11
+
12
+ RealVision:
13
+ path: "SG161222/RealVisXL_V4.0"
14
+ single_files: false
15
+ use_safetensors: true
16
+
17
+ SDXL:
18
+ path: "stabilityai/stable-diffusion-xl-base-1.0"
19
+ single_files: false
20
+ use_safetensors: true
21
+
22
+
23
+ Unstable:
24
+ path: "stablediffusionapi/sdxl-unstable-diffusers-y"
25
+ single_files: false
26
+ use_safetensors: false
data/photomaker-v1.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:529d503fa378bfb3a74e3384ab2064d7269d59f0638324555d22067c31e275bc
3
+ size 934103417
examples/Robert/images.jpeg ADDED
examples/lecun/yann-lecun2.png ADDED
examples/taylor/1-1.png ADDED
examples/twoperson/1.jpeg ADDED
examples/twoperson/2.png ADDED
fonts/Inkfree.ttf ADDED
Binary file (41.2 kB). View file
 
gradio_app_sdxl_specific_id_low_vram.py ADDED
@@ -0,0 +1,1345 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from this import d
2
+ import gradio as gr
3
+ import numpy as np
4
+ import torch
5
+ import gc
6
+ import copy
7
+ import os
8
+ import random
9
+ import datetime
10
+ from PIL import ImageFont
11
+ from utils.gradio_utils import (
12
+ character_to_dict,
13
+ process_original_prompt,
14
+ get_ref_character,
15
+ cal_attn_mask_xl,
16
+ cal_attn_indice_xl_effcient_memory,
17
+ is_torch2_available,
18
+ )
19
+
20
+ if is_torch2_available():
21
+ from utils.gradio_utils import AttnProcessor2_0 as AttnProcessor
22
+ else:
23
+ from utils.gradio_utils import AttnProcessor
24
+ from huggingface_hub import hf_hub_download
25
+ from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl import (
26
+ StableDiffusionXLPipeline,
27
+ )
28
+ from diffusers.schedulers.scheduling_ddim import DDIMScheduler
29
+ import torch.nn.functional as F
30
+ from diffusers.utils.loading_utils import load_image
31
+ from utils.utils import get_comic
32
+ from utils.style_template import styles
33
+ from utils.load_models_utils import get_models_dict, load_models
34
+
35
+ STYLE_NAMES = list(styles.keys())
36
+ DEFAULT_STYLE_NAME = "Japanese Anime"
37
+ global models_dict
38
+
39
+ models_dict = get_models_dict()
40
+
41
+ # Automatically select the device
42
+ device = (
43
+ "cuda"
44
+ if torch.cuda.is_available()
45
+ else "mps" if torch.backends.mps.is_available() else "cpu"
46
+ )
47
+ print(f"@@device:{device}")
48
+
49
+
50
+ # check if the file exists locally at a specified path before downloading it.
51
+ # if the file doesn't exist, it uses `hf_hub_download` to download the file
52
+ # and optionally move it to a specific directory. If the file already
53
+ # exists, it simply uses the local path.
54
+ local_dir = "data/"
55
+ photomaker_local_path = f"{local_dir}photomaker-v1.bin"
56
+ if not os.path.exists(photomaker_local_path):
57
+ photomaker_path = hf_hub_download(
58
+ repo_id="TencentARC/PhotoMaker",
59
+ filename="photomaker-v1.bin",
60
+ repo_type="model",
61
+ local_dir=local_dir,
62
+ )
63
+ else:
64
+ photomaker_path = photomaker_local_path
65
+
66
+ MAX_SEED = np.iinfo(np.int32).max
67
+
+
+def setup_seed(seed):
+    torch.manual_seed(seed)
+    if device == "cuda":
+        torch.cuda.manual_seed_all(seed)
+    np.random.seed(seed)
+    random.seed(seed)
+    torch.backends.cudnn.deterministic = True
+
+
+def set_text_unfinished():
+    return gr.update(
+        visible=True,
+        value="<h3>(Not Finished) Generating ··· The intermediate results will be shown.</h3>",
+    )
+
+
+def set_text_finished():
+    return gr.update(visible=True, value="<h3>Generation Finished</h3>")
+
+
+#################################################
+def get_image_path_list(folder_name):
+    image_basename_list = os.listdir(folder_name)
+    image_path_list = sorted(
+        [os.path.join(folder_name, basename) for basename in image_basename_list]
+    )
+    return image_path_list
+
+
+#################################################
+class SpatialAttnProcessor2_0(torch.nn.Module):
+    r"""
+    Attention processor for IP-Adapter for PyTorch 2.0.
+    Args:
+        hidden_size (`int`):
+            The hidden size of the attention layer.
+        cross_attention_dim (`int`):
+            The number of channels in the `encoder_hidden_states`.
+        text_context_len (`int`, defaults to 77):
+            The context length of the text features.
+        scale (`float`, defaults to 1.0):
+            the weight scale of image prompt.
+    """
+
+    def __init__(
+        self,
+        hidden_size=None,
+        cross_attention_dim=None,
+        id_length=4,
+        device=device,
+        dtype=torch.float16,
+    ):
+        super().__init__()
+        if not hasattr(F, "scaled_dot_product_attention"):
+            raise ImportError(
+                "AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0."
+            )
+        self.device = device
+        self.dtype = dtype
+        self.hidden_size = hidden_size
+        self.cross_attention_dim = cross_attention_dim
+        self.total_length = id_length + 1
+        self.id_length = id_length
+        self.id_bank = {}
+
+    def __call__(
+        self,
+        attn,
+        hidden_states,
+        encoder_hidden_states=None,
+        attention_mask=None,
+        temb=None,
+    ):
+        # un_cond_hidden_states, cond_hidden_states = hidden_states.chunk(2)
+        # un_cond_hidden_states = self.__call2__(attn, un_cond_hidden_states,encoder_hidden_states,attention_mask,temb)
+        # Generate a random number between 0 and 1
+        global total_count, attn_count, cur_step, indices1024, indices4096
+        global sa32, sa64
+        global write
+        global height, width
+        global character_dict, character_index_dict, invert_character_index_dict, cur_character, ref_indexs_dict, ref_totals
+        if attn_count == 0 and cur_step == 0:
+            indices1024, indices4096 = cal_attn_indice_xl_effcient_memory(
+                self.total_length,
+                self.id_length,
+                sa32,
+                sa64,
+                height,
+                width,
+                device=self.device,
+                dtype=self.dtype,
+            )
+        if write:
+            assert len(cur_character) == 1
+            if hidden_states.shape[1] == (height // 32) * (width // 32):
+                indices = indices1024
+            else:
+                indices = indices4096
+            # print(f"write:{cur_step}")
+            total_batch_size, nums_token, channel = hidden_states.shape
+            img_nums = total_batch_size // 2
+            hidden_states = hidden_states.reshape(-1, img_nums, nums_token, channel)
+            # print(img_nums,len(indices),hidden_states.shape,self.total_length)
+            if cur_character[0] not in self.id_bank:
+                self.id_bank[cur_character[0]] = {}
+            self.id_bank[cur_character[0]][cur_step] = [
+                hidden_states[:, img_ind, indices[img_ind], :]
+                .reshape(2, -1, channel)
+                .clone()
+                for img_ind in range(img_nums)
+            ]
+            hidden_states = hidden_states.reshape(-1, nums_token, channel)
+            # self.id_bank[cur_step] = [hidden_states[:self.id_length].clone(), hidden_states[self.id_length:].clone()]
+        else:
+            # encoder_hidden_states = torch.cat((self.id_bank[cur_step][0].to(self.device),self.id_bank[cur_step][1].to(self.device)))
+            # TODO: add multi-person control
+            encoder_arr = []
+            for character in cur_character:
+                encoder_arr = encoder_arr + [
+                    tensor.to(self.device)
+                    for tensor in self.id_bank[character][cur_step]
+                ]
+        # Check whether the random number is greater than the threshold
+        if cur_step < 1:
+            hidden_states = self.__call2__(
+                attn, hidden_states, None, attention_mask, temb
+            )
+        else:  # 256 1024 4096
+            random_number = random.random()
+            if cur_step < 20:
+                rand_num = 0.3
+            else:
+                rand_num = 0.1
+            # print(f"hidden state shape {hidden_states.shape[1]}")
+            if random_number > rand_num:
+                if hidden_states.shape[1] == (height // 32) * (width // 32):
+                    indices = indices1024
+                else:
+                    indices = indices4096
+                # print("before attention",hidden_states.shape,attention_mask.shape,encoder_hidden_states.shape if encoder_hidden_states is not None else "None")
+                if write:
+                    total_batch_size, nums_token, channel = hidden_states.shape
+                    img_nums = total_batch_size // 2
+                    hidden_states = hidden_states.reshape(
+                        -1, img_nums, nums_token, channel
+                    )
+                    encoder_arr = [
+                        hidden_states[:, img_ind, indices[img_ind], :].reshape(
+                            2, -1, channel
+                        )
+                        for img_ind in range(img_nums)
+                    ]
+                    for img_ind in range(img_nums):
+                        # print(img_nums)
+                        # assert img_nums != 1
+                        img_ind_list = [i for i in range(img_nums)]
+                        # print(img_ind_list,img_ind)
+                        img_ind_list.remove(img_ind)
+                        # print(img_ind,invert_character_index_dict[img_ind])
+                        # print(character_index_dict[invert_character_index_dict[img_ind]])
+                        # print(img_ind_list)
+                        # print(img_ind,img_ind_list)
+                        encoder_hidden_states_tmp = torch.cat(
+                            [encoder_arr[img_ind] for img_ind in img_ind_list]
+                            + [hidden_states[:, img_ind, :, :]],
+                            dim=1,
+                        )
+
+                        hidden_states[:, img_ind, :, :] = self.__call2__(
+                            attn,
+                            hidden_states[:, img_ind, :, :],
+                            encoder_hidden_states_tmp,
+                            None,
+                            temb,
+                        )
+                else:
+                    _, nums_token, channel = hidden_states.shape
+                    # img_nums = total_batch_size // 2
+                    # encoder_hidden_states = encoder_hidden_states.reshape(-1,img_nums,nums_token,channel)
+                    hidden_states = hidden_states.reshape(2, -1, nums_token, channel)
+                    # print(len(indices))
+                    # encoder_arr = [encoder_hidden_states[:,img_ind,indices[img_ind],:].reshape(2,-1,channel) for img_ind in range(img_nums)]
+                    encoder_hidden_states_tmp = torch.cat(
+                        encoder_arr + [hidden_states[:, 0, :, :]], dim=1
+                    )
+                    # print(len(encoder_arr),encoder_hidden_states_tmp.shape)
+                    hidden_states[:, 0, :, :] = self.__call2__(
+                        attn,
+                        hidden_states[:, 0, :, :],
+                        encoder_hidden_states_tmp,
+                        None,
+                        temb,
+                    )
+                hidden_states = hidden_states.reshape(-1, nums_token, channel)
+            else:
+                hidden_states = self.__call2__(
+                    attn, hidden_states, None, attention_mask, temb
+                )
+        attn_count += 1
+        if attn_count == total_count:
+            attn_count = 0
+            cur_step += 1
+            indices1024, indices4096 = cal_attn_indice_xl_effcient_memory(
+                self.total_length,
+                self.id_length,
+                sa32,
+                sa64,
+                height,
+                width,
+                device=self.device,
+                dtype=self.dtype,
+            )
+
+        return hidden_states
+
+    def __call2__(
+        self,
+        attn,
+        hidden_states,
+        encoder_hidden_states=None,
+        attention_mask=None,
+        temb=None,
+    ):
+        residual = hidden_states
+
+        if attn.spatial_norm is not None:
+            hidden_states = attn.spatial_norm(hidden_states, temb)
+
+        input_ndim = hidden_states.ndim
+
+        if input_ndim == 4:
+            batch_size, channel, height, width = hidden_states.shape
+            hidden_states = hidden_states.view(
+                batch_size, channel, height * width
+            ).transpose(1, 2)
+
+        batch_size, sequence_length, channel = hidden_states.shape
+        # print(hidden_states.shape)
+        if attention_mask is not None:
+            attention_mask = attn.prepare_attention_mask(
+                attention_mask, sequence_length, batch_size
+            )
+            # scaled_dot_product_attention expects attention_mask shape to be
+            # (batch, heads, source_length, target_length)
+            attention_mask = attention_mask.view(
+                batch_size, attn.heads, -1, attention_mask.shape[-1]
+            )
+
+        if attn.group_norm is not None:
+            hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(
+                1, 2
+            )
+
+        query = attn.to_q(hidden_states)
+
+        if encoder_hidden_states is None:
+            encoder_hidden_states = hidden_states  # B, N, C
+        # else:
+        #     encoder_hidden_states = encoder_hidden_states.view(-1,self.id_length+1,sequence_length,channel).reshape(-1,(self.id_length+1) * sequence_length,channel)
+
+        key = attn.to_k(encoder_hidden_states)
+        value = attn.to_v(encoder_hidden_states)
+
+        inner_dim = key.shape[-1]
+        head_dim = inner_dim // attn.heads
+
+        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        # the output of sdp = (batch, num_heads, seq_len, head_dim)
+        # TODO: add support for attn.scale when we move to Torch 2.1
+        hidden_states = F.scaled_dot_product_attention(
+            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
+        )
+
+        hidden_states = hidden_states.transpose(1, 2).reshape(
+            batch_size, -1, attn.heads * head_dim
+        )
+        hidden_states = hidden_states.to(query.dtype)
+
+        # linear proj
+        hidden_states = attn.to_out[0](hidden_states)
+        # dropout
+        hidden_states = attn.to_out[1](hidden_states)
+
+        if input_ndim == 4:
+            hidden_states = hidden_states.transpose(-1, -2).reshape(
+                batch_size, channel, height, width
+            )
+
+        if attn.residual_connection:
+            hidden_states = hidden_states + residual
+
+        hidden_states = hidden_states / attn.rescale_output_factor
+
+        return hidden_states
+
+
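The processor above implements the consistent self-attention trick in two phases: a write pass caches a sampled subset of each identity image's self-attention tokens in `id_bank`, and a read pass concatenates those cached tokens into the key/value context of later frames. A minimal sketch of that pattern, with illustrative names and shapes (not the actual implementation):

```python
import torch

id_bank = {}  # character -> {denoising_step: cached token tensor}

def write_phase(character, step, hidden_states, indices):
    # Cache a sampled subset of this character's self-attention tokens
    # (hidden_states is assumed to be [batch, tokens, channels]).
    id_bank.setdefault(character, {})[step] = hidden_states[:, indices, :].clone()

def read_phase(character, step, hidden_states):
    # Reuse the cached tokens as extra key/value context so that new frames
    # attend to the identity images and keep the character consistent.
    cached = id_bank[character][step]
    return torch.cat([cached, hidden_states], dim=1)
```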
+def set_attention_processor(unet, id_length, is_ipadapter=False):
+    global attn_procs
+    attn_procs = {}
+    for name in unet.attn_processors.keys():
+        cross_attention_dim = (
+            None
+            if name.endswith("attn1.processor")
+            else unet.config.cross_attention_dim
+        )
+        if name.startswith("mid_block"):
+            hidden_size = unet.config.block_out_channels[-1]
+        elif name.startswith("up_blocks"):
+            block_id = int(name[len("up_blocks.")])
+            hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
+        elif name.startswith("down_blocks"):
+            block_id = int(name[len("down_blocks.")])
+            hidden_size = unet.config.block_out_channels[block_id]
+        if cross_attention_dim is None:
+            if name.startswith("up_blocks"):
+                attn_procs[name] = SpatialAttnProcessor2_0(id_length=id_length)
+            else:
+                attn_procs[name] = AttnProcessor()
+        else:
+            # NOTE: the IP-Adapter branch below is never taken in this app
+            # (is_ipadapter is always False here) and IPAttnProcessor2_0 is
+            # not imported in this file.
+            if is_ipadapter:
+                attn_procs[name] = IPAttnProcessor2_0(
+                    hidden_size=hidden_size,
+                    cross_attention_dim=cross_attention_dim,
+                    scale=1,
+                    num_tokens=4,
+                ).to(unet.device, dtype=torch.float16)
+            else:
+                attn_procs[name] = AttnProcessor()
+
+    unet.set_attn_processor(copy.deepcopy(attn_procs))
+
+
+#################################################
+#################################################
+canvas_html = "<div id='canvas-root' style='max-width:400px; margin: 0 auto'></div>"
+load_js = """
+async () => {
+    const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/sketch-canvas.js"
+    fetch(url)
+        .then(res => res.text())
+        .then(text => {
+            const script = document.createElement('script');
+            script.type = "module"
+            script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
+            document.head.appendChild(script);
+        });
+}
+"""
+
+get_js_colors = """
+async (canvasData) => {
+    const canvasEl = document.getElementById("canvas-root");
+    return [canvasEl._data]
+}
+"""
+
+css = """
+#color-bg{display:flex;justify-content: center;align-items: center;}
+.color-bg-item{width: 100%; height: 32px}
+#main_button{width:100%}
+<style>
+"""
+
+
+def save_single_character_weights(unet, character, description, filepath):
+    """
+    Save the lists of GPU tensors held in each attention processor's id_bank to a file.
+    Args:
+    - unet: the model that contains the attention-processor instances.
+    - filepath: path of the file the weights are saved to.
+    """
+    weights_to_save = {}
+    weights_to_save["description"] = description
+    weights_to_save["character"] = character
+    for attn_name, attn_processor in unet.attn_processors.items():
+        if isinstance(attn_processor, SpatialAttnProcessor2_0):
+            # Move each tensor to the CPU so that it can be serialized
+            weights_to_save[attn_name] = {}
+            for step_key in attn_processor.id_bank[character].keys():
+                weights_to_save[attn_name][step_key] = [
+                    tensor.cpu()
+                    for tensor in attn_processor.id_bank[character][step_key]
+                ]
+    # Save the weights with torch.save
+    torch.save(weights_to_save, filepath)
+
+
+def load_single_character_weights(unet, filepath):
+    """
+    Load weights from a file into each attention processor's id_bank.
+    Args:
+    - unet: the model that contains the attention-processor instances.
+    - filepath: path of the weights file.
+    """
+    # Read the weights with torch.load
+    weights_to_load = torch.load(filepath, map_location=torch.device("cpu"))
+    character = weights_to_load["character"]
+    description = weights_to_load["description"]
+    for attn_name, attn_processor in unet.attn_processors.items():
+        if isinstance(attn_processor, SpatialAttnProcessor2_0):
+            # Move the weights to the GPU (if available) and assign them to id_bank
+            attn_processor.id_bank[character] = {}
+            for step_key in weights_to_load[attn_name].keys():
+                attn_processor.id_bank[character][step_key] = [
+                    tensor.to(unet.device)
+                    for tensor in weights_to_load[attn_name][step_key]
+                ]
+
+
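A hypothetical round trip with the two helpers above; the character name, description, and file path are illustrative:

```python
# Illustrative usage; the description and path are made up.
save_single_character_weights(
    pipe.unet, "[Bob]", "A man, wearing a black suit", "results/weights/Bob.pt"
)
# Later (e.g., in a new session), restore the cached identity tokens:
load_single_character_weights(pipe.unet, "results/weights/Bob.pt")
```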
+def save_results(unet, img_list):
+
+    timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
+    folder_name = f"results/{timestamp}"
+    weight_folder_name = f"{folder_name}/weights"
+    # Create the output folders
+    if not os.path.exists(folder_name):
+        os.makedirs(folder_name)
+        os.makedirs(weight_folder_name)
+
+    for idx, img in enumerate(img_list):
+        file_path = os.path.join(folder_name, f"image_{idx}.png")  # image file name
+        img.save(file_path)
+    global character_dict
+    # for char in character_dict:
+    #     description = character_dict[char]
+    #     save_single_character_weights(unet,char,description,os.path.join(weight_folder_name, f'{char}.pt'))
+
+
+#################################################
+title = r"""
+<h1 align="center">StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</h1>
+"""
+
+description = r"""
+<b>Official 🤗 Gradio demo</b> for <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'><b>StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</b></a>.<br>
+❗️❗️❗️[<b>Important</b>] Personalization steps:<br>
+1️⃣ Enter a textual description for the character. If you add a ref image, make sure to <b>follow the class word</b> you want to customize with the <b>trigger word</b> `img`, such as `man img`, `woman img`, or `girl img`.<br>
+2️⃣ Enter the prompt array; each line corresponds to one generated image.<br>
+3️⃣ Choose your preferred style template.<br>
+4️⃣ Click the <b>Submit</b> button to start customizing.
+"""
+
+article = r"""
+
+If StoryDiffusion is helpful, please help to ⭐ the <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'>Github Repo</a>. Thanks!
+[![GitHub Stars](https://img.shields.io/github/stars/HVision-NKU/StoryDiffusion?style=social)](https://github.com/HVision-NKU/StoryDiffusion)
+---
+📝 **Citation**
+<br>
+If our work is useful for your research, please consider citing:
+
+```bibtex
+@article{Zhou2024storydiffusion,
+  title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
+  author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
+  year={2024}
+}
+```
+📋 **License**
+<br>
+Apache-2.0 LICENSE.
+
+📧 **Contact**
+<br>
+If you have any questions, please feel free to reach out to me at <b>[email protected]</b>.
+"""
+version = r"""
+<h3 align="center">StoryDiffusion Version 0.02 (test version)</h3>
+
+<h5>1. Supports ref images. (Cartoon ref images are not supported yet.)</h5>
+<h5>2. Supports typesetting styles and captioning. (By default, the prompt is used as the caption for each image. If you need to change the caption, add a # at the end of each line; only the part after the # will be added as a caption to the image.)</h5>
+<h5>3. [NC] symbol (The [NC] symbol is used as a flag to indicate that no characters should be present in the generated scene images. If you want to do that, prepend "[NC]" at the beginning of the line. For example, to generate a scene of falling leaves without any character, write: "[NC] The leaves are falling.")</h5>
+<h5 align="center">Tips: </h5>
+"""
+#################################################
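Putting the conventions from `description` and `version` together, a character description plus prompt array could look like this (illustrative only; text after `#` becomes the caption, and `[NC]` frames contain no character):

```text
(1) Textual Description for Character — one character per line:
[Bob] A man img, wearing a black suit

(3) Comic Description — one frame per line:
[Bob] on the road, near the forest
[NC] The car on the road, near the forest #They drive to the forest in search of treasure.
```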
+global attn_count, total_count, id_length, total_length, cur_step, cur_model_type
+global write
+global sa32, sa64
+global height, width
+attn_count = 0
+total_count = 0
+cur_step = 0
+id_length = 4
+total_length = 5
+cur_model_type = ""
+global attn_procs, unet
+attn_procs = {}
+###
+write = False
+###
+sa32 = 0.5
+sa64 = 0.5
+height = 768
+width = 768
+###
+global pipe
+global sd_model_path
+pipe = None
+sd_model_path = models_dict["Unstable"]["path"]  # "SG161222/RealVisXL_V4.0"
+single_files = models_dict["Unstable"]["single_files"]
+### LOAD Stable Diffusion Pipeline
+if single_files:
+    pipe = StableDiffusionXLPipeline.from_single_file(
+        sd_model_path, torch_dtype=torch.float16
+    )
+else:
+    pipe = StableDiffusionXLPipeline.from_pretrained(
+        sd_model_path, torch_dtype=torch.float16, use_safetensors=False
+    )
+pipe = pipe.to(device)
+pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
+# pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
+pipe.scheduler.set_timesteps(50)
+pipe.enable_vae_slicing()
+if device != "mps":
+    pipe.enable_model_cpu_offload()
+unet = pipe.unet
+cur_model_type = "Unstable" + "-" + "original"
+### Insert PairedAttention
+for name in unet.attn_processors.keys():
+    cross_attention_dim = (
+        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
+    )
+    if name.startswith("mid_block"):
+        hidden_size = unet.config.block_out_channels[-1]
+    elif name.startswith("up_blocks"):
+        block_id = int(name[len("up_blocks.")])
+        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
+    elif name.startswith("down_blocks"):
+        block_id = int(name[len("down_blocks.")])
+        hidden_size = unet.config.block_out_channels[block_id]
+    if cross_attention_dim is None and (name.startswith("up_blocks")):
+        attn_procs[name] = SpatialAttnProcessor2_0(id_length=id_length)
+        total_count += 1
+    else:
+        attn_procs[name] = AttnProcessor()
+print("successfully loaded paired self-attention")
+print(f"number of the processor : {total_count}")
+unet.set_attn_processor(copy.deepcopy(attn_procs))
+global mask1024, mask4096
+mask1024, mask4096 = cal_attn_mask_xl(
+    total_length,
+    id_length,
+    sa32,
+    sa64,
+    height,
+    width,
+    device=device,
+    dtype=torch.float16,
+)
+
+######### Gradio Functions #############
+
+
+def swap_to_gallery(images):
+    return (
+        gr.update(value=images, visible=True),
+        gr.update(visible=True),
+        gr.update(visible=False),
+    )
+
+
+def upload_example_to_gallery(images, prompt, style, negative_prompt):
+    return (
+        gr.update(value=images, visible=True),
+        gr.update(visible=True),
+        gr.update(visible=False),
+    )
+
+
+def remove_back_to_files():
+    return gr.update(visible=False), gr.update(visible=False), gr.update(visible=True)
+
+
+def remove_tips():
+    return gr.update(visible=False)
+
+
+def apply_style_positive(style_name: str, positive: str):
+    p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
+    return p.replace("{prompt}", positive)
+
+
+def apply_style(style_name: str, positives: list, negative: str = ""):
+    p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
+    return [
+        p.replace("{prompt}", positive) for positive in positives
+    ], n + " " + negative
+
+
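Each entry in `styles` is unpacked above as a (positive, negative) template pair whose positive half carries a `{prompt}` placeholder; `apply_style` fills the placeholder for every frame and appends the template's negative terms to the user's negative prompt. A small illustration with a made-up template:

```python
# Illustrative only: a made-up entry in the shape `apply_style` unpacks.
styles_example = {
    "Japanese Anime": ("anime artwork of {prompt}, vibrant colors", "photo, realistic")
}
p, n = styles_example["Japanese Anime"]
print(p.replace("{prompt}", "a man reading a newspaper"))
# -> anime artwork of a man reading a newspaper, vibrant colors
```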
+def change_visiale_by_model_type(_model_type):
+    if _model_type == "Only Using Textual Description":
+        return (
+            gr.update(visible=False),
+            gr.update(visible=False),
+            gr.update(visible=False),
+        )
+    elif _model_type == "Using Ref Images":
+        return (
+            gr.update(visible=True),
+            gr.update(visible=True),
+            gr.update(visible=False),
+        )
+    else:
+        raise ValueError("Invalid model type", _model_type)
+
+
+def load_character_files(character_files: str):
+    if character_files == "":
+        raise gr.Error("Please set a character file!")
+    character_files_arr = character_files.splitlines()
+    primarytext = []
+    for character_file_name in character_files_arr:
+        character_file = torch.load(
+            character_file_name, map_location=torch.device("cpu")
+        )
+        primarytext.append(character_file["character"] + character_file["description"])
+    return array2string(primarytext)
+
+
+def load_character_files_on_running(unet, character_files: str):
+    if character_files == "":
+        return False
+    character_files_arr = character_files.splitlines()
+    for character_file in character_files_arr:
+        load_single_character_weights(unet, character_file)
+    return True
+
+
+######### Image Generation ##############
+def process_generation(
+    _sd_type,
+    _model_type,
+    _upload_images,
+    _num_steps,
+    style_name,
+    _Ip_Adapter_Strength,
+    _style_strength_ratio,
+    guidance_scale,
+    seed_,
+    sa32_,
+    sa64_,
+    id_length_,
+    general_prompt,
+    negative_prompt,
+    prompt_array,
+    G_height,
+    G_width,
+    _comic_type,
+    font_choice,
+    _char_files,
+):  # Corrected font_choice usage
+    if len(general_prompt.splitlines()) >= 3:
+        raise gr.Error(
+            "Support for more than three characters is temporarily unavailable due to VRAM limitations, but this issue will be resolved soon."
+        )
+    _model_type = "Photomaker" if _model_type == "Using Ref Images" else "original"
+    if _model_type == "Photomaker" and "img" not in general_prompt:
+        raise gr.Error(
+            'Please add the trigger word " img " behind the class word you want to customize, such as: man img or woman img'
+        )
+    if _upload_images is None and _model_type != "original":
+        raise gr.Error("Cannot find any input face image!")
+    global sa32, sa64, id_length, total_length, attn_procs, unet, cur_model_type
+    global write
+    global cur_step, attn_count
+    global height, width
+    height = G_height
+    width = G_width
+    global pipe
+    global sd_model_path, models_dict
+    sd_model_path = models_dict[_sd_type]
+    use_safe_tensor = True
+    # Clear any cached identity tokens from a previous run
+    for attn_processor in pipe.unet.attn_processors.values():
+        if isinstance(attn_processor, SpatialAttnProcessor2_0):
+            for values in attn_processor.id_bank.values():
+                del values
+            attn_processor.id_bank = {}
+            attn_processor.id_length = id_length
+            attn_processor.total_length = id_length + 1
+    gc.collect()
+    if cur_model_type != _sd_type + "-" + _model_type:
+        # apply the style template
+        ##### load pipe
+        del pipe
+        gc.collect()
+        if device == "cuda":
+            torch.cuda.empty_cache()
+        model_info = models_dict[_sd_type]
+        model_info["model_type"] = _model_type
+        pipe = load_models(model_info, device=device, photomaker_path=photomaker_path)
+        set_attention_processor(pipe.unet, id_length_, is_ipadapter=False)
+        ##### ########################
+        pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
+        pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
+        cur_model_type = _sd_type + "-" + _model_type
+        pipe.enable_vae_slicing()
+        if device != "mps":
+            pipe.enable_model_cpu_offload()
+    else:
+        unet = pipe.unet
+        # unet.set_attn_processor(copy.deepcopy(attn_procs))
+
+    load_chars = load_character_files_on_running(unet, character_files=_char_files)
+
+    prompts = prompt_array.splitlines()
+    global character_dict, character_index_dict, invert_character_index_dict, ref_indexs_dict, ref_totals
+    character_dict, character_list = character_to_dict(general_prompt)
+
+    start_merge_step = int(float(_style_strength_ratio) / 100 * _num_steps)
+    if start_merge_step > 30:
+        start_merge_step = 30
+    print(f"start_merge_step:{start_merge_step}")
+    generator = torch.Generator(device=device).manual_seed(seed_)
+    sa32, sa64 = sa32_, sa64_
+    id_length = id_length_
+    clipped_prompts = prompts[:]
+    nc_indexs = []
+    for ind, prompt in enumerate(clipped_prompts):
+        if "[NC]" in prompt:
+            nc_indexs.append(ind)
+            if ind < id_length:
+                raise gr.Error(
+                    f"The first {id_length} rows are id prompts and cannot use [NC]!"
+                )
+    prompts = [
+        prompt if "[NC]" not in prompt else prompt.replace("[NC]", "")
+        for prompt in clipped_prompts
+    ]
+
+    prompts = [
+        prompt.rpartition("#")[0] if "#" in prompt else prompt for prompt in prompts
+    ]
+    print(prompts)
+    # id_prompts = prompts[:id_length]
+    (
+        character_index_dict,
+        invert_character_index_dict,
+        replace_prompts,
+        ref_indexs_dict,
+        ref_totals,
+    ) = process_original_prompt(character_dict, prompts.copy(), id_length)
+    if _model_type != "original":
+        input_id_images_dict = {}
+        if len(_upload_images) != len(character_dict.keys()):
+            raise gr.Error(
+                f"The number of uploaded images ({len(_upload_images)}) does not match the number of characters ({len(character_dict.keys())})!"
+            )
+        for ind, img in enumerate(_upload_images):
+            input_id_images_dict[character_list[ind]] = [load_image(img)]
+    print(character_dict)
+    print(character_index_dict)
+    print(invert_character_index_dict)
+    # real_prompts = prompts[id_length:]
+    if device == "cuda":
+        torch.cuda.empty_cache()
+    write = True
+    cur_step = 0
+
+    attn_count = 0
+    # id_prompts, negative_prompt = apply_style(style_name, id_prompts, negative_prompt)
+    # print(id_prompts)
+    setup_seed(seed_)
+    total_results = []
+    id_images = []
+    results_dict = {}
+    global cur_character
+    if not load_chars:
+        for character_key in character_dict.keys():
+            cur_character = [character_key]
+            ref_indexs = ref_indexs_dict[character_key]
+            print(character_key, ref_indexs)
+            current_prompts = [replace_prompts[ref_ind] for ref_ind in ref_indexs]
+            print(current_prompts)
+            setup_seed(seed_)
+            generator = torch.Generator(device=device).manual_seed(seed_)
+            cur_step = 0
+            cur_positive_prompts, negative_prompt = apply_style(
+                style_name, current_prompts, negative_prompt
+            )
+            if _model_type == "original":
+                id_images = pipe(
+                    cur_positive_prompts,
+                    num_inference_steps=_num_steps,
+                    guidance_scale=guidance_scale,
+                    height=height,
+                    width=width,
+                    negative_prompt=negative_prompt,
+                    generator=generator,
+                ).images
+            elif _model_type == "Photomaker":
+                id_images = pipe(
+                    cur_positive_prompts,
+                    input_id_images=input_id_images_dict[character_key],
+                    num_inference_steps=_num_steps,
+                    guidance_scale=guidance_scale,
+                    start_merge_step=start_merge_step,
+                    height=height,
+                    width=width,
+                    negative_prompt=negative_prompt,
+                    generator=generator,
+                ).images
+            else:
+                raise NotImplementedError(
+                    "You should choose between original and Photomaker!",
+                    f"But you chose {_model_type}",
+                )
+
+            # total_results = id_images + total_results
+            # yield total_results
+            print(id_images)
+            for ind, img in enumerate(id_images):
+                print(ref_indexs[ind])
+                results_dict[ref_indexs[ind]] = img
+            # real_images = []
+            yield [results_dict[ind] for ind in results_dict.keys()]
+    write = False
+    if not load_chars:
+        real_prompts_inds = [
+            ind for ind in range(len(prompts)) if ind not in ref_totals
+        ]
+    else:
+        real_prompts_inds = [ind for ind in range(len(prompts))]
+    print(real_prompts_inds)
+
+    for real_prompts_ind in real_prompts_inds:
+        real_prompt = replace_prompts[real_prompts_ind]
+        cur_character = get_ref_character(prompts[real_prompts_ind], character_dict)
+        print(cur_character, real_prompt)
+        setup_seed(seed_)
+        if len(cur_character) > 1 and _model_type == "Photomaker":
+            raise gr.Error(
+                "Multiple characters are temporarily not supported in Ref Image mode!"
+            )
+        generator = torch.Generator(device=device).manual_seed(seed_)
+        cur_step = 0
+        real_prompt = apply_style_positive(style_name, real_prompt)
+        if _model_type == "original":
+            results_dict[real_prompts_ind] = pipe(
+                real_prompt,
+                num_inference_steps=_num_steps,
+                guidance_scale=guidance_scale,
+                height=height,
+                width=width,
+                negative_prompt=negative_prompt,
+                generator=generator,
+            ).images[0]
+        elif _model_type == "Photomaker":
+            results_dict[real_prompts_ind] = pipe(
+                real_prompt,
+                input_id_images=(
+                    input_id_images_dict[cur_character[0]]
+                    if real_prompts_ind not in nc_indexs
+                    else input_id_images_dict[character_list[0]]
+                ),
+                num_inference_steps=_num_steps,
+                guidance_scale=guidance_scale,
+                start_merge_step=start_merge_step,
+                height=height,
+                width=width,
+                negative_prompt=negative_prompt,
+                generator=generator,
+                nc_flag=True if real_prompts_ind in nc_indexs else False,
+            ).images[0]
+        else:
+            raise NotImplementedError(
+                "You should choose between original and Photomaker!",
+                f"But you chose {_model_type}",
+            )
+        yield [results_dict[ind] for ind in results_dict.keys()]
+    total_results = [results_dict[ind] for ind in range(len(prompts))]
+    if _comic_type != "No typesetting (default)":
+        captions = prompt_array.splitlines()
+        captions = [caption.replace("[NC]", "") for caption in captions]
+        captions = [
+            caption.split("#")[-1] if "#" in caption else caption
+            for caption in captions
+        ]
+        font_path = os.path.join("fonts", font_choice)
+        font = ImageFont.truetype(font_path, int(45))
+        total_results = (
+            get_comic(total_results, _comic_type, captions=captions, font=font)
+            + total_results
+        )
+    save_results(pipe.unet, total_results)
+
+    yield total_results
+
+
+def array2string(arr):
+    stringtmp = ""
+    for i, part in enumerate(arr):
+        if i != len(arr) - 1:
+            stringtmp += part + "\n"
+        else:
+            stringtmp += part
+
+    return stringtmp
+
+
+#################################################
+#################################################
+### define the interface
+
+with gr.Blocks(css=css) as demo:
+    binary_matrixes = gr.State([])
+    color_layout = gr.State([])
+
+    # gr.Markdown(logo)
+    gr.Markdown(title)
+    gr.Markdown(description)
+
+    with gr.Row():
+        with gr.Group(elem_id="main-image"):
+
+            prompts = []
+            colors = []
+
+            with gr.Column(visible=True) as gen_prompt_vis:
+                sd_type = gr.Dropdown(
+                    choices=list(models_dict.keys()),
+                    value="Unstable",
+                    label="sd_type",
+                    info="Select pretrained model",
+                )
+                model_type = gr.Radio(
+                    ["Only Using Textual Description", "Using Ref Images"],
+                    label="model_type",
+                    value="Only Using Textual Description",
+                    info="Control type of the Character",
+                )
+                with gr.Group(visible=False) as control_image_input:
+                    files = gr.Files(
+                        label="Drag (Select) 1 or more photos of your face",
+                        file_types=["image"],
+                    )
+                    uploaded_files = gr.Gallery(
+                        label="Your images",
+                        visible=False,
+                        columns=5,
+                        rows=1,
+                        height=200,
+                    )
+                    with gr.Column(visible=False) as clear_button:
+                        remove_and_reupload = gr.ClearButton(
+                            value="Remove and upload new ones",
+                            components=files,
+                            size="sm",
+                        )
+                general_prompt = gr.Textbox(
+                    value="",
+                    lines=2,
+                    label="(1) Textual Description for Character",
+                    interactive=True,
+                )
+                negative_prompt = gr.Textbox(
+                    value="", label="(2) Negative_prompt", interactive=True
+                )
+                style = gr.Dropdown(
+                    label="Style template",
+                    choices=STYLE_NAMES,
+                    value=DEFAULT_STYLE_NAME,
+                )
+                prompt_array = gr.Textbox(
+                    lines=3,
+                    value="",
+                    label="(3) Comic Description (each line corresponds to a frame).",
+                    interactive=True,
+                )
+                char_path = gr.Textbox(
+                    lines=2,
+                    value="",
+                    visible=False,
+                    label="(Optional) Character files",
+                    interactive=True,
+                )
+                char_btn = gr.Button("Load Character files", visible=False)
+                with gr.Accordion("(4) Tune the hyperparameters", open=True):
+                    font_choice = gr.Dropdown(
+                        label="Select Font",
+                        choices=[
+                            f for f in os.listdir("./fonts") if f.endswith(".ttf")
+                        ],
+                        value="Inkfree.ttf",
+                        info="Select font for the final slide.",
+                        interactive=True,
+                    )
+                    sa32_ = gr.Slider(
+                        label=" (The degree of Paired Attention at 32 x 32 self-attention layers) ",
+                        minimum=0,
+                        maximum=1.0,
+                        value=0.5,
+                        step=0.1,
+                    )
+                    sa64_ = gr.Slider(
+                        label=" (The degree of Paired Attention at 64 x 64 self-attention layers) ",
+                        minimum=0,
+                        maximum=1.0,
+                        value=0.5,
+                        step=0.1,
+                    )
+                    id_length_ = gr.Slider(
+                        label="Number of id images in total images",
+                        minimum=1,
+                        maximum=4,
+                        value=1,
+                        step=1,
+                    )
+                    with gr.Row():
+                        seed_ = gr.Slider(
+                            label="Seed", minimum=-1, maximum=MAX_SEED, value=0, step=1
+                        )
+                        randomize_seed_btn = gr.Button("🎲", size="sm")
+                    num_steps = gr.Slider(
+                        label="Number of sample steps",
+                        minimum=20,
+                        maximum=100,
+                        step=1,
+                        value=20,
+                    )
+                    G_height = gr.Slider(
+                        label="height",
+                        minimum=256,
+                        maximum=1024,
+                        step=32,
+                        value=768,
+                    )
+                    G_width = gr.Slider(
+                        label="width",
+                        minimum=256,
+                        maximum=1024,
+                        step=32,
+                        value=768,
+                    )
+                    comic_type = gr.Radio(
+                        [
+                            "No typesetting (default)",
+                            "Four Pannel",
+                            "Classic Comic Style",
+                        ],
+                        value="Classic Comic Style",
+                        label="Typesetting Style",
+                        info="Select the typesetting style",
+                    )
+                    guidance_scale = gr.Slider(
+                        label="Guidance scale",
+                        minimum=0.1,
+                        maximum=10.0,
+                        step=0.1,
+                        value=5,
+                    )
+                    style_strength_ratio = gr.Slider(
+                        label="Style strength of Ref Image (%)",
+                        minimum=15,
+                        maximum=50,
+                        step=1,
+                        value=20,
+                        visible=False,
+                    )
+                    Ip_Adapter_Strength = gr.Slider(
+                        label="Ip_Adapter_Strength",
+                        minimum=0,
+                        maximum=1,
+                        step=0.1,
+                        value=0.5,
+                        visible=False,
+                    )
+                final_run_btn = gr.Button("Generate ! 😺")
+
+        with gr.Column():
+            out_image = gr.Gallery(label="Result", columns=2, height="auto")
+            generated_information = gr.Markdown(
+                label="Generation Details", value="", visible=False
+            )
+            gr.Markdown(version)
+    model_type.change(
+        fn=change_visiale_by_model_type,
+        inputs=model_type,
+        outputs=[control_image_input, style_strength_ratio, Ip_Adapter_Strength],
+    )
+    files.upload(
+        fn=swap_to_gallery, inputs=files, outputs=[uploaded_files, clear_button, files]
+    )
+    remove_and_reupload.click(
+        fn=remove_back_to_files, outputs=[uploaded_files, clear_button, files]
+    )
+    char_btn.click(fn=load_character_files, inputs=char_path, outputs=[general_prompt])
+
+    randomize_seed_btn.click(
+        fn=lambda: random.randint(-1, MAX_SEED),
+        inputs=[],
+        outputs=seed_,
+    )
+
+    final_run_btn.click(fn=set_text_unfinished, outputs=generated_information).then(
+        process_generation,
+        inputs=[
+            sd_type,
+            model_type,
+            files,
+            num_steps,
+            style,
+            Ip_Adapter_Strength,
+            style_strength_ratio,
+            guidance_scale,
+            seed_,
+            sa32_,
+            sa64_,
+            id_length_,
+            general_prompt,
+            negative_prompt,
+            prompt_array,
+            G_height,
+            G_width,
+            comic_type,
+            font_choice,
+            char_path,
+        ],
+        outputs=out_image,
+    ).then(fn=set_text_finished, outputs=generated_information)
+
+    gr.Examples(
+        examples=[
+            [
+                0,
+                0.5,
+                0.5,
+                2,
+                "[Bob] A man, wearing a black suit\n[Alice]a woman, wearing a white shirt",
+                "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
+                array2string(
+                    [
+                        "[Bob] at home, reading the newspaper #At home, the newspaper says there is a treasure house in the forest.",
+                        "[Bob] on the road, near the forest",
+                        "[Alice] is making a call at home # [Bob] invited [Alice] to join him on an adventure.",
+                        "[NC]A tiger appeared in the forest, at night",
+                        "[NC] The car on the road, near the forest #They drive to the forest in search of treasure.",
+                        "[Bob] very frightened, open mouth, in the forest, at night",
+                        "[Alice] very frightened, open mouth, in the forest, at night",
+                        "[Bob] and [Alice] running very fast, in the forest, at night",
+                        "[NC] A house in the forest, at night #Suddenly, they discover the treasure house!",
+                        "[Bob] and [Alice] in the house filled with treasure, laughing, at night #They are overjoyed inside the house.",
+                    ]
+                ),
+                "Comic book",
+                "Only Using Textual Description",
+                get_image_path_list("./examples/taylor"),
+                768,
+                768,
+            ],
+            [
+                0,
+                0.5,
+                0.5,
+                2,
+                "[Bob] A man img, wearing a black suit\n[Alice]a woman img, wearing a white shirt",
+                "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
+                array2string(
+                    [
+                        "[Bob] at home, reading the newspaper #At home, the newspaper says there is a treasure house in the forest.",
+                        "[Bob] on the road, near the forest",
+                        "[Alice] is making a call at home # [Bob] invited [Alice] to join him on an adventure.",
+                        "[NC] The car on the road, near the forest #They drive to the forest in search of treasure.",
+                        "[NC]A tiger appeared in the forest, at night",
+                        "[Bob] very frightened, open mouth, in the forest, at night",
+                        "[Alice] very frightened, open mouth, in the forest, at night",
+                        "[Bob] running very fast, in the forest, at night",
+                        "[NC] A house in the forest, at night #Suddenly, they discover the treasure house!",
+                        "[Bob] in the house filled with treasure, laughing, at night #They are overjoyed inside the house.",
+                    ]
+                ),
+                "Comic book",
+                "Using Ref Images",
+                get_image_path_list("./examples/twoperson"),
+                1024,
+                1024,
+            ],
+            [
+                1,
+                0.5,
+                0.5,
+                3,
+                "[Taylor]a woman img, wearing a white T-shirt, blue loose hair",
+                "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
+                array2string(
+                    [
+                        "[Taylor]wake up in the bed",
+                        "[Taylor]have breakfast",
+                        "[Taylor]is on the road, go to company",
+                        "[Taylor]work in the company",
+                        "[Taylor]Take a walk next to the company at noon",
+                        "[Taylor]lying in bed at night",
+                    ]
+                ),
+                "Japanese Anime",
+                "Using Ref Images",
+                get_image_path_list("./examples/taylor"),
+                768,
+                768,
+            ],
+            [
+                0,
+                0.5,
+                0.5,
+                3,
+                "[Bob]a man, wearing black jacket",
+                "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
+                array2string(
+                    [
+                        "[Bob]wake up in the bed",
+                        "[Bob]have breakfast",
+                        "[Bob]is on the road, go to the company, close look",
+                        "[Bob]work in the company",
+                        "[Bob]laughing happily",
+                        "[Bob]lying in bed at night",
+                    ]
+                ),
+                "Japanese Anime",
+                "Only Using Textual Description",
+                get_image_path_list("./examples/taylor"),
+                768,
+                768,
+            ],
+            [
+                0,
+                0.3,
+                0.5,
+                3,
+                "[Kitty]a girl, wearing white shirt, black skirt, black tie, yellow hair",
+                "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
+                array2string(
+                    [
+                        "[Kitty]at home #At home, she begins to draw.",
+                        "[Kitty]sitting alone on a park bench.",
+                        "[Kitty]reading a book on a park bench.",
+                        "[NC]A squirrel approaches, peeking over the bench.",
+                        "[Kitty]look around in the park. # She looks around and enjoys the beauty of nature.",
+                        "[NC]leaf falls from the tree, landing on the sketchbook.",
+                        "[Kitty]picks up the leaf, examining its details closely.",
+                        "[NC]The brown squirrel appears.",
+                        "[Kitty]is very happy # She is very happy to see the squirrel again.",
+                        "[NC]The brown squirrel takes the cracker and scampers up a tree. # She gives the squirrel a cracker.",
+                    ]
+                ),
+                "Japanese Anime",
+                "Only Using Textual Description",
+                get_image_path_list("./examples/taylor"),
+                768,
+                768,
+            ],
+        ],
+        inputs=[
+            seed_,
+            sa32_,
+            sa64_,
+            id_length_,
+            general_prompt,
+            negative_prompt,
+            prompt_array,
+            style,
+            model_type,
+            files,
+            G_height,
+            G_width,
+        ],
+        # outputs=[post_sketch, binary_matrixes, *color_row, *colors, *prompts, gen_prompt_vis, general_prompt, seed_],
+        # run_on_click=True,
+        label="😺 Examples 😺",
+    )
+    gr.Markdown(article)
+
+
+demo.launch(server_name="0.0.0.0", share=True)
images/logo.png ADDED
images/pad_images.png ADDED
oldversion/gradio_app_sdxl_specific_id_mps.py ADDED
@@ -0,0 +1,767 @@
+from email.policy import default
+from this import d
+import gradio as gr
+import numpy as np
+import torch
+import gc
+from huggingface_hub import hf_hub_download
+import requests
+import random
+import os
+import sys
+import pickle
+from PIL import Image
+from tqdm.auto import tqdm
+from datetime import datetime
+from utils.gradio_utils import is_torch2_available
+if is_torch2_available():
+    from utils.gradio_utils import \
+        AttnProcessor2_0 as AttnProcessor
+else:
+    from utils.gradio_utils import AttnProcessor
+
+import diffusers
+from diffusers import StableDiffusionXLPipeline
+from utils import PhotoMakerStableDiffusionXLPipeline
+from diffusers import DDIMScheduler
+import torch.nn.functional as F
+from utils.gradio_utils import cal_attn_mask_xl
+import copy
+import os
+from diffusers.utils import load_image
+from utils.utils import get_comic
+from utils.style_template import styles
+import torch.nn.functional as F
+image_encoder_path = "./data/models/ip_adapter/sdxl_models/image_encoder"
+ip_ckpt = "./data/models/ip_adapter/sdxl_models/ip-adapter_sdxl_vit-h.bin"
+os.environ["no_proxy"] = "localhost,127.0.0.1,::1"
+STYLE_NAMES = list(styles.keys())
+DEFAULT_STYLE_NAME = "Japanese Anime"
+global models_dict
+use_va = False
+models_dict = {
+    # "Juggernaut": "RunDiffusion/Juggernaut-XL-v8",
+    "RealVision": "SG161222/RealVisXL_V4.0",
+    "SDXL": "stabilityai/stable-diffusion-xl-base-1.0",
+    "Unstable": "stablediffusionapi/sdxl-unstable-diffusers-y"
+}
+photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")
+MAX_SEED = np.iinfo(np.int32).max
+def setup_seed(seed):
+    torch.manual_seed(seed)
+    # torch.cuda.manual_seed_all(seed)
+    np.random.seed(seed)
+    random.seed(seed)
+    torch.backends.cudnn.deterministic = True
+def set_text_unfinished():
+    return gr.update(visible=True, value="<h3>(Not Finished) Generating ··· The intermediate results will be shown.</h3>")
+def set_text_finished():
+    return gr.update(visible=True, value="<h3>Generation Finished</h3>")
+#################################################
+def get_image_path_list(folder_name):
+    image_basename_list = os.listdir(folder_name)
+    image_path_list = sorted([os.path.join(folder_name, basename) for basename in image_basename_list])
+    return image_path_list
+
+#################################################
+class SpatialAttnProcessor2_0(torch.nn.Module):
+    r"""
+    Attention processor for IP-Adapter for PyTorch 2.0.
+    Args:
+        hidden_size (`int`):
+            The hidden size of the attention layer.
+        cross_attention_dim (`int`):
+            The number of channels in the `encoder_hidden_states`.
+        text_context_len (`int`, defaults to 77):
+            The context length of the text features.
+        scale (`float`, defaults to 1.0):
+            the weight scale of image prompt.
+    """
+
+    def __init__(self, hidden_size=None, cross_attention_dim=None, id_length=4, device="mps", dtype=torch.float32):
+        super().__init__()
+        if not hasattr(F, "scaled_dot_product_attention"):
+            raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.")
+        self.device = device
+        self.dtype = dtype
+        self.hidden_size = hidden_size
+        self.cross_attention_dim = cross_attention_dim
+        self.total_length = id_length + 1
+        self.id_length = id_length
+        self.id_bank = {}
+
+    def __call__(
+        self,
+        attn,
+        hidden_states,
+        encoder_hidden_states=None,
+        attention_mask=None,
+        temb=None):
+        # un_cond_hidden_states, cond_hidden_states = hidden_states.chunk(2)
+        # un_cond_hidden_states = self.__call2__(attn, un_cond_hidden_states,encoder_hidden_states,attention_mask,temb)
+        # Generate a random number between 0 and 1
+        global total_count, attn_count, cur_step, mask1024, mask4096
+        global sa32, sa64
+        global write
+        global height, width
+        if write:
+            # print(f"write:{cur_step}")
+            self.id_bank[cur_step] = [hidden_states[:self.id_length].clone(), hidden_states[self.id_length:].clone()]
+        else:
+            encoder_hidden_states = torch.cat((self.id_bank[cur_step][0].to(self.device), hidden_states[:1], self.id_bank[cur_step][1].to(self.device), hidden_states[1:]))
+        # Check whether the random number is greater than the threshold
+        if cur_step < 1:
+            hidden_states = self.__call2__(attn, hidden_states, None, attention_mask, temb)
+        else:  # 256 1024 4096
+            random_number = random.random()
+            if cur_step < 20:
+                rand_num = 0.3
+            else:
+                rand_num = 0.1
+            # print(f"hidden state shape {hidden_states.shape[1]}")
+            if random_number > rand_num:
+                # print("mask shape",mask1024.shape,mask4096.shape)
+                if not write:
+                    if hidden_states.shape[1] == (height//32) * (width//32):
+                        attention_mask = mask1024[mask1024.shape[0] // self.total_length * self.id_length:]
+                    else:
+                        attention_mask = mask4096[mask4096.shape[0] // self.total_length * self.id_length:]
+                else:
+                    # print(self.total_length,self.id_length,hidden_states.shape,(height//32) * (width//32))
+                    if hidden_states.shape[1] == (height//32) * (width//32):
+                        attention_mask = mask1024[:mask1024.shape[0] // self.total_length * self.id_length, :mask1024.shape[0] // self.total_length * self.id_length]
+                    else:
+                        attention_mask = mask4096[:mask4096.shape[0] // self.total_length * self.id_length, :mask4096.shape[0] // self.total_length * self.id_length]
+                # print(attention_mask.shape)
+                # print("before attention",hidden_states.shape,attention_mask.shape,encoder_hidden_states.shape if encoder_hidden_states is not None else "None")
+                hidden_states = self.__call1__(attn, hidden_states, encoder_hidden_states, attention_mask, temb)
+            else:
+                hidden_states = self.__call2__(attn, hidden_states, None, attention_mask, temb)
+        attn_count += 1
+        if attn_count == total_count:
+            attn_count = 0
+            cur_step += 1
+            mask1024, mask4096 = cal_attn_mask_xl(self.total_length, self.id_length, sa32, sa64, height, width, device=self.device, dtype=self.dtype)
+
+        return hidden_states
+    def __call1__(
+        self,
+        attn,
+        hidden_states,
+        encoder_hidden_states=None,
+        attention_mask=None,
+        temb=None,
+    ):
+        # print("hidden state shape",hidden_states.shape,self.id_length)
+        residual = hidden_states
+        # if encoder_hidden_states is not None:
+        #     raise Exception("not implement")
+        if attn.spatial_norm is not None:
+            hidden_states = attn.spatial_norm(hidden_states, temb)
+        input_ndim = hidden_states.ndim
+
+        if input_ndim == 4:
+            total_batch_size, channel, height, width = hidden_states.shape
+            hidden_states = hidden_states.view(total_batch_size, channel, height * width).transpose(1, 2)
+        total_batch_size, nums_token, channel = hidden_states.shape
+        img_nums = total_batch_size // 2
+        hidden_states = hidden_states.view(-1, img_nums, nums_token, channel).reshape(-1, img_nums * nums_token, channel)
+
+        batch_size, sequence_length, _ = hidden_states.shape
+
+        if attn.group_norm is not None:
+            hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+
+        query = attn.to_q(hidden_states)
+
+        if encoder_hidden_states is None:
+            encoder_hidden_states = hidden_states  # B, N, C
+        else:
+            encoder_hidden_states = encoder_hidden_states.view(-1, self.id_length+1, nums_token, channel).reshape(-1, (self.id_length+1) * nums_token, channel)
+
+        key = attn.to_k(encoder_hidden_states)
+        value = attn.to_v(encoder_hidden_states)
+
+        inner_dim = key.shape[-1]
+        head_dim = inner_dim // attn.heads
+
+        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+        # print(key.shape,value.shape,query.shape,attention_mask.shape)
+        # the output of sdp = (batch, num_heads, seq_len, head_dim)
+        # TODO: add support for attn.scale when we move to Torch 2.1
+        # print(query.shape,key.shape,value.shape,attention_mask.shape)
+        hidden_states = F.scaled_dot_product_attention(
+            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
+        )
+
+        hidden_states = hidden_states.transpose(1, 2).reshape(total_batch_size, -1, attn.heads * head_dim)
+        hidden_states = hidden_states.to(query.dtype)
+
+        # linear proj
+        hidden_states = attn.to_out[0](hidden_states)
+        # dropout
+        hidden_states = attn.to_out[1](hidden_states)
+
+        # if input_ndim == 4:
+        #     tile_hidden_states = tile_hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
+
+        # if attn.residual_connection:
+        #     tile_hidden_states = tile_hidden_states + residual
+
+        if input_ndim == 4:
+            hidden_states = hidden_states.transpose(-1, -2).reshape(total_batch_size, channel, height, width)
+        if attn.residual_connection:
+            hidden_states = hidden_states + residual
+        hidden_states = hidden_states / attn.rescale_output_factor
+        # print(hidden_states.shape)
+        return hidden_states
+    def __call2__(
+        self,
+        attn,
+        hidden_states,
+        encoder_hidden_states=None,
+        attention_mask=None,
+        temb=None):
+        residual = hidden_states
+
+        if attn.spatial_norm is not None:
+            hidden_states = attn.spatial_norm(hidden_states, temb)
+
+        input_ndim = hidden_states.ndim
+
+        if input_ndim == 4:
+            batch_size, channel, height, width = hidden_states.shape
+            hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
+
+        batch_size, sequence_length, channel = (
+            hidden_states.shape
+        )
+        # print(hidden_states.shape)
+        if attention_mask is not None:
+            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
+            # scaled_dot_product_attention expects attention_mask shape to be
+            # (batch, heads, source_length, target_length)
+            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
+
+        if attn.group_norm is not None:
+            hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+
+        query = attn.to_q(hidden_states)
+
+        if encoder_hidden_states is None:
+            encoder_hidden_states = hidden_states  # B, N, C
+        else:
+            encoder_hidden_states = encoder_hidden_states.view(-1, self.id_length+1, sequence_length, channel).reshape(-1, (self.id_length+1) * sequence_length, channel)
+
+        key = attn.to_k(encoder_hidden_states)
+        value = attn.to_v(encoder_hidden_states)
+
+        inner_dim = key.shape[-1]
+        head_dim = inner_dim // attn.heads
+
+        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        # the output of sdp = (batch, num_heads, seq_len, head_dim)
+        # TODO: add support for attn.scale when we move to Torch 2.1
+        hidden_states = F.scaled_dot_product_attention(
+            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
+        )
+
+        hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
+        hidden_states = hidden_states.to(query.dtype)
+
+        # linear proj
+        hidden_states = attn.to_out[0](hidden_states)
+        # dropout
+        hidden_states = attn.to_out[1](hidden_states)
+
+        if input_ndim == 4:
+            hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
+
+        if attn.residual_connection:
+            hidden_states = hidden_states + residual
+
+        hidden_states = hidden_states / attn.rescale_output_factor
+
+        return hidden_states
+
297
+ def set_attention_processor(unet,id_length,is_ipadapter = False):
298
+ global attn_procs
299
+ attn_procs = {}
300
+ for name in unet.attn_processors.keys():
301
+ cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
302
+ if name.startswith("mid_block"):
303
+ hidden_size = unet.config.block_out_channels[-1]
304
+ elif name.startswith("up_blocks"):
305
+ block_id = int(name[len("up_blocks.")])
306
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
307
+ elif name.startswith("down_blocks"):
308
+ block_id = int(name[len("down_blocks.")])
309
+ hidden_size = unet.config.block_out_channels[block_id]
310
+ if cross_attention_dim is None:
311
+ if name.startswith("up_blocks") :
312
+ attn_procs[name] = SpatialAttnProcessor2_0(id_length = id_length)
313
+ else:
314
+ attn_procs[name] = AttnProcessor()
315
+ else:
316
+ if is_ipadapter:
317
+ attn_procs[name] = IPAttnProcessor2_0(
318
+ hidden_size=hidden_size,
319
+ cross_attention_dim=cross_attention_dim,
320
+ scale=1,
321
+ num_tokens=4,
322
+ ).to(unet.device, dtype=torch.float16)
323
+ else:
324
+ attn_procs[name] = AttnProcessor()
325
+
326
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
327
+ #################################################
328
+ #################################################
329
+ canvas_html = "<div id='canvas-root' style='max-width:400px; margin: 0 auto'></div>"
330
+ load_js = """
331
+ async () => {
332
+ const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/sketch-canvas.js"
333
+ fetch(url)
334
+ .then(res => res.text())
335
+ .then(text => {
336
+ const script = document.createElement('script');
337
+ script.type = "module"
338
+ script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
339
+ document.head.appendChild(script);
340
+ });
341
+ }
342
+ """
343
+
344
+ get_js_colors = """
345
+ async (canvasData) => {
346
+ const canvasEl = document.getElementById("canvas-root");
347
+ return [canvasEl._data]
348
+ }
349
+ """
350
+
351
+ css = '''
352
+ #color-bg{display:flex;justify-content: center;align-items: center;}
353
+ .color-bg-item{width: 100%; height: 32px}
354
+ #main_button{width:100%}
355
+ <style>
356
+ '''
357
+
358
+
359
+ #################################################
360
+ title = r"""
361
+ <h1 align="center">StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</h1>
362
+ """
363
+
364
+ description = r"""
365
+ <b>Official 🤗 Gradio demo</b> for <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'><b>StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</b></a>.<br>
366
+ ❗️❗️❗️[<b>Important</b>] Personalization steps:<br>
367
+ 1️⃣ Enter a Textual Description for Character, if you add the Ref-Image, making sure to <b>follow the class word</b> you want to customize with the <b>trigger word</b>: `img`, such as: `man img` or `woman img` or `girl img`.<br>
368
+ 2️⃣ Enter the prompt array, each line corrsponds to one generated image.<br>
369
+ 3️⃣ Choose your preferred style template.<br>
370
+ 4️⃣ Click the <b>Submit</b> button to start customizing.
371
+ """
372
+
373
+ article = r"""
374
+
375
+ If StoryDiffusion is helpful, please help to ⭐ the <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'>Github Repo</a>. Thanks!
376
+ [![GitHub Stars](https://img.shields.io/github/stars/HVision-NKU/StoryDiffusion?style=social)](https://github.com/HVision-NKU/StoryDiffusion)
377
+ ---
378
+ 📝 **Citation**
379
+ <br>
380
+ If our work is useful for your research, please consider citing:
381
+
382
+ ```bibtex
383
+ @article{Zhou2024storydiffusion,
384
+ title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
385
+ author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
386
+ year={2024}
387
+ }
388
+ ```
389
+ 📋 **License**
390
+ <br>
391
+ The Contents you create are under Apache-2.0 LICENSE. The Code are under Attribution-NonCommercial 4.0 International.
392
+
393
+ 📧 **Contact**
394
+ <br>
395
+ If you have any questions, please feel free to reach me out at <b>[email protected]</b>.
396
+ """
397
+ version = r"""
398
+ <h3 align="center">StoryDiffusion Version 0.01 (test version)</h3>
399
+
400
+ <h5 >1. Support image ref image. (Cartoon Ref image is not support now)</h5>
401
+ <h5 >2. Support Typesetting Style and Captioning.(By default, the prompt is used as the caption for each image. If you need to change the caption, add a # at the end of each line. Only the part after the # will be added as a caption to the image.)</h5>
402
+ <h5 >3. [NC]symbol (The [NC] symbol is used as a flag to indicate that no characters should be present in the generated scene images. If you want do that, prepend the "[NC]" at the beginning of the line. For example, to generate a scene of falling leaves without any character, write: "[NC] The leaves are falling.")</h5>
403
+ <h5 align="center">Tips: </h4>
404
+ """
405
+ #################################################
406
+ global attn_count, total_count, id_length, total_length,cur_step, cur_model_type
407
+ global write
408
+ global sa32, sa64
409
+ global height,width
410
+ attn_count = 0
411
+ total_count = 0
412
+ cur_step = 0
413
+ id_length = 4
414
+ total_length = 5
415
+ cur_model_type = ""
416
+ device="mps"
417
+ global attn_procs,unet
418
+ attn_procs = {}
419
+ ###
420
+ write = False
421
+ ###
422
+ sa32 = 0.5
423
+ sa64 = 0.5
424
+ height = 768
425
+ width = 768
426
+ ###
427
+ global pipe
428
+ global sd_model_path
429
+ pipe = None
430
+ sd_model_path = models_dict["RealVision"]#"SG161222/RealVisXL_V4.0"
431
+ ### LOAD Stable Diffusion Pipeline
432
+ pipe = StableDiffusionXLPipeline.from_pretrained(sd_model_path, torch_dtype=torch.float16, use_safetensors = True)
433
+ pipe = pipe.to(device)
434
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
435
+ # pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
436
+ pipe.scheduler.set_timesteps(50)
437
+ unet = pipe.unet
438
+ ### Insert PairedAttention
439
+ for name in unet.attn_processors.keys():
440
+ cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
441
+ if name.startswith("mid_block"):
442
+ hidden_size = unet.config.block_out_channels[-1]
443
+ elif name.startswith("up_blocks"):
444
+ block_id = int(name[len("up_blocks.")])
445
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
446
+ elif name.startswith("down_blocks"):
447
+ block_id = int(name[len("down_blocks.")])
448
+ hidden_size = unet.config.block_out_channels[block_id]
449
+ if cross_attention_dim is None and (name.startswith("up_blocks") ) :
450
+ attn_procs[name] = SpatialAttnProcessor2_0(id_length = id_length)
451
+ total_count +=1
452
+ else:
453
+ attn_procs[name] = AttnProcessor()
454
+ print("successsfully load paired self-attention")
455
+ print(f"number of the processor : {total_count}")
456
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
457
+ global mask1024,mask4096
458
+ mask1024, mask4096 = cal_attn_mask_xl(total_length,id_length,sa32,sa64,height,width,device=device,dtype= torch.float16)
459
+
460
+ ######### Gradio Fuction #############
461
+
462
+ def swap_to_gallery(images):
463
+ return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
464
+
465
+ def upload_example_to_gallery(images, prompt, style, negative_prompt):
466
+ return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
467
+
468
+ def remove_back_to_files():
469
+ return gr.update(visible=False), gr.update(visible=False), gr.update(visible=True)
470
+
471
+ def remove_tips():
472
+ return gr.update(visible=False)
473
+
474
+ def apply_style_positive(style_name: str, positive: str):
475
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
476
+ return p.replace("{prompt}", positive)
477
+
478
+ def apply_style(style_name: str, positives: list, negative: str = ""):
479
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
480
+ return [p.replace("{prompt}", positive) for positive in positives], n + ' ' + negative
481
+
482
+ def change_visiale_by_model_type(_model_type):
483
+ if _model_type == "Only Using Textual Description":
484
+ return gr.update(visible=False), gr.update(visible=False), gr.update(visible=False)
485
+ elif _model_type == "Using Ref Images":
486
+ return gr.update(visible=True), gr.update(visible=True), gr.update(visible=False)
487
+ else:
488
+ raise ValueError("Invalid model type",_model_type)
489
+
490
+
491
+ ######### Image Generation ##############
492
+ def process_generation(_sd_type,_model_type,_upload_images, _num_steps,style_name, _Ip_Adapter_Strength ,_style_strength_ratio, guidance_scale, seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt,prompt_array,G_height,G_width,_comic_type):
493
+ _model_type = "Photomaker" if _model_type == "Using Ref Images" else "original"
494
+ if _model_type == "Photomaker" and "img" not in general_prompt:
495
+ raise gr.Error("Please add the triger word \" img \" behind the class word you want to customize, such as: man img or woman img")
496
+ if _upload_images is None and _model_type != "original":
497
+ raise gr.Error(f"Cannot find any input face image!")
498
+ global sa32, sa64,id_length,total_length,attn_procs,unet,cur_model_type
499
+ global write
500
+ global cur_step,attn_count
501
+ global height,width
502
+ height = G_height
503
+ width = G_width
504
+ global pipe
505
+ global sd_model_path,models_dict
506
+ sd_model_path = models_dict[_sd_type]
507
+ use_safe_tensor = True
508
+ if cur_model_type != _sd_type+"-"+_model_type+""+str(id_length_):
509
+ if _sd_type == "Unstable":
510
+ use_safe_tensor = False
511
+ # apply the style template
512
+ ##### load pipe
513
+
514
+ if _model_type == "original":
515
+ pipe = StableDiffusionXLPipeline.from_pretrained(sd_model_path, torch_dtype=torch.float16, use_safetensors=use_safe_tensor)
516
+ pipe = pipe.to(device)
517
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
518
+ elif _model_type == "Photomaker":
519
+ pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
520
+ sd_model_path, torch_dtype=torch.float16, use_safetensors=use_safe_tensor)
521
+ pipe = pipe.to(device)
522
+ pipe.load_photomaker_adapter(
523
+ os.path.dirname(photomaker_path),
524
+ subfolder="",
525
+ weight_name=os.path.basename(photomaker_path),
526
+ trigger_word="img" # define the trigger word
527
+ )
528
+ pipe.fuse_lora()
529
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
530
+ else:
531
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
532
+ ##### ########################
533
+ pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
534
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
535
+ cur_model_type = _sd_type+"-"+_model_type+""+str(id_length_)
536
+ else:
537
+ unet = pipe.unet
538
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
539
+ if _model_type != "original":
540
+ input_id_images = []
541
+ for img in _upload_images:
542
+ print(img)
543
+ input_id_images.append(load_image(img))
544
+ prompts = prompt_array.splitlines()
545
+ start_merge_step = int(float(_style_strength_ratio) / 100 * _num_steps)
546
+ if start_merge_step > 30:
547
+ start_merge_step = 30
548
+ print(f"start_merge_step:{start_merge_step}")
549
+ generator = torch.Generator(device="mps").manual_seed(seed_)
550
+ sa32, sa64 = sa32_, sa64_
551
+ id_length = id_length_
552
+ clipped_prompts = prompts[:]
553
+ prompts = [general_prompt + "," + prompt if "[NC]" not in prompt else prompt.replace("[NC]","") for prompt in clipped_prompts]
554
+ prompts = [prompt.rpartition('#')[0] if "#" in prompt else prompt for prompt in prompts]
555
+ print(prompts)
556
+ id_prompts = prompts[:id_length]
557
+ real_prompts = prompts[id_length:]
558
+ #torch.cuda.empty_cache()
559
+ write = True
560
+ cur_step = 0
561
+
562
+ attn_count = 0
563
+ id_prompts, negative_prompt = apply_style(style_name, id_prompts, negative_prompt)
564
+ setup_seed(seed_)
565
+ total_results = []
566
+ if _model_type == "original":
567
+ id_images = pipe(id_prompts, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
568
+ elif _model_type == "Photomaker":
569
+ id_images = pipe(id_prompts,input_id_images=input_id_images, num_inference_steps=_num_steps, guidance_scale=guidance_scale, start_merge_step = start_merge_step, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
570
+ else:
571
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
572
+ total_results = id_images + total_results
573
+ yield total_results
574
+ real_images = []
575
+ write = False
576
+ for real_prompt in real_prompts:
577
+ setup_seed(seed_)
578
+ cur_step = 0
579
+ real_prompt = apply_style_positive(style_name, real_prompt)
580
+ if _model_type == "original":
581
+ real_images.append(pipe(real_prompt, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images[0])
582
+ elif _model_type == "Photomaker":
583
+ real_images.append(pipe(real_prompt, input_id_images=input_id_images, num_inference_steps=_num_steps, guidance_scale=guidance_scale, start_merge_step = start_merge_step, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images[0])
584
+ else:
585
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
586
+ total_results = [real_images[-1]] + total_results
587
+ yield total_results
588
+ if _comic_type != "No typesetting (default)":
589
+ captions= prompt_array.splitlines()
590
+ captions = [caption.replace("[NC]","") for caption in captions]
591
+ captions = [caption.split('#')[-1] if "#" in caption else caption for caption in captions]
592
+ from PIL import ImageFont
593
+ total_results = get_comic(id_images + real_images, _comic_type,captions= captions,font=ImageFont.truetype("./fonts/Inkfree.ttf", int(45))) + total_results
594
+ yield total_results
595
+
596
+
597
+
598
+ def array2string(arr):
599
+ stringtmp = ""
600
+ for i,part in enumerate(arr):
601
+ if i != len(arr)-1:
602
+ stringtmp += part +"\n"
603
+ else:
604
+ stringtmp += part
605
+
606
+ return stringtmp
607
+
608
+
609
+ #################################################
610
+ #################################################
611
+ ### define the interface
612
+ with gr.Blocks(css=css) as demo:
613
+ binary_matrixes = gr.State([])
614
+ color_layout = gr.State([])
615
+
616
+ # gr.Markdown(logo)
617
+ gr.Markdown(title)
618
+ gr.Markdown(description)
619
+
620
+ with gr.Row():
621
+ with gr.Group(elem_id="main-image"):
622
+
623
+ prompts = []
624
+ colors = []
625
+
626
+ with gr.Column(visible=True) as gen_prompt_vis:
627
+ sd_type = gr.Dropdown(choices=list(models_dict.keys()), value = "Unstable",label="sd_type", info="Select pretrained model")
628
+ model_type = gr.Radio(["Only Using Textual Description", "Using Ref Images"], label="model_type", value = "Only Using Textual Description", info="Control type of the Character")
629
+ with gr.Group(visible=False) as control_image_input:
630
+ files = gr.Files(
631
+ label="Drag (Select) 1 or more photos of your face",
632
+ file_types=["image"],
633
+ )
634
+ uploaded_files = gr.Gallery(label="Your images", visible=False, columns=5, rows=1, height=200)
635
+ with gr.Column(visible=False) as clear_button:
636
+ remove_and_reupload = gr.ClearButton(value="Remove and upload new ones", components=files, size="sm")
637
+ general_prompt = gr.Textbox(value='', label="(1) Textual Description for Character", interactive=True)
638
+ negative_prompt = gr.Textbox(value='', label="(2) Negative_prompt", interactive=True)
639
+ style = gr.Dropdown(label="Style template", choices=STYLE_NAMES, value=DEFAULT_STYLE_NAME)
640
+ prompt_array = gr.Textbox(lines = 3,value='', label="(3) Comic Description (each line corresponds to a frame).", interactive=True)
641
+ with gr.Accordion("(4) Tune the hyperparameters", open=True):
642
+ sa32_ = gr.Slider(label=" (The degree of Paired Attention at 32 x 32 self-attention layers) ", minimum=0, maximum=1., value=0.5, step=0.1)
643
+ sa64_ = gr.Slider(label=" (The degree of Paired Attention at 64 x 64 self-attention layers) ", minimum=0, maximum=1., value=0.5, step=0.1)
644
+ id_length_ = gr.Slider(label= "Number of id images in total images" , minimum=2, maximum=4, value=2, step=1)
645
+ seed_ = gr.Slider(label="Seed", minimum=-1, maximum=MAX_SEED, value=0, step=1)
646
+ num_steps = gr.Slider(
647
+ label="Number of sample steps",
648
+ minimum=20,
649
+ maximum=100,
650
+ step=1,
651
+ value=50,
652
+ )
653
+ G_height = gr.Slider(
654
+ label="height",
655
+ minimum=256,
656
+ maximum=1024,
657
+ step=32,
658
+ value=768,
659
+ )
660
+ G_width = gr.Slider(
661
+ label="width",
662
+ minimum=256,
663
+ maximum=1024,
664
+ step=32,
665
+ value=768,
666
+ )
667
+ comic_type = gr.Radio(["No typesetting (default)", "Four Pannel", "Classic Comic Style"], value = "Classic Comic Style", label="Typesetting Style", info="Select the typesetting style ")
668
+ guidance_scale = gr.Slider(
669
+ label="Guidance scale",
670
+ minimum=0.1,
671
+ maximum=10.0,
672
+ step=0.1,
673
+ value=5,
674
+ )
675
+ style_strength_ratio = gr.Slider(
676
+ label="Style strength of Ref Image (%)",
677
+ minimum=15,
678
+ maximum=50,
679
+ step=1,
680
+ value=20,
681
+ visible=False
682
+ )
683
+ Ip_Adapter_Strength = gr.Slider(
684
+ label="Ip_Adapter_Strength",
685
+ minimum=0,
686
+ maximum=1,
687
+ step=0.1,
688
+ value=0.5,
689
+ visible=False
690
+ )
691
+ final_run_btn = gr.Button("Generate ! 😺")
692
+
693
+
694
+ with gr.Column():
695
+ out_image = gr.Gallery(label="Result", columns=2, height='auto')
696
+ generated_information = gr.Markdown(label="Generation Details", value="",visible=False)
697
+ gr.Markdown(version)
698
+ model_type.change(fn = change_visiale_by_model_type , inputs = model_type, outputs=[control_image_input,style_strength_ratio,Ip_Adapter_Strength])
699
+ files.upload(fn=swap_to_gallery, inputs=files, outputs=[uploaded_files, clear_button, files])
700
+ remove_and_reupload.click(fn=remove_back_to_files, outputs=[uploaded_files, clear_button, files])
701
+
702
+ final_run_btn.click(fn=set_text_unfinished, outputs = generated_information
703
+ ).then(process_generation, inputs=[sd_type,model_type,files, num_steps,style, Ip_Adapter_Strength,style_strength_ratio, guidance_scale, seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt, prompt_array,G_height,G_width,comic_type], outputs=out_image
704
+ ).then(fn=set_text_finished,outputs = generated_information)
705
+
706
+
707
+ gr.Examples(
708
+ examples=[
709
+ [0,0.5,0.5,2,"a man, wearing black suit",
710
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
711
+ array2string(["at home, read new paper #at home, The newspaper says there is a treasure house in the forest.",
712
+ "on the road, near the forest",
713
+ "[NC] The car on the road, near the forest #He drives to the forest in search of treasure.",
714
+ "[NC]A tiger appeared in the forest, at night ",
715
+ "very frightened, open mouth, in the forest, at night",
716
+ "running very fast, in the forest, at night",
717
+ "[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!",
718
+ "in the house filled with treasure, laughing, at night #He is overjoyed inside the house."
719
+ ]),
720
+ "Comic book","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
721
+ ],
722
+ [1,0.5,0.5,3,"a woman img, wearing a white T-shirt, blue loose hair",
723
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
724
+ array2string(["wake up in the bed",
725
+ "have breakfast",
726
+ "is on the road, go to company",
727
+ "work in the company",
728
+ "Take a walk next to the company at noon",
729
+ "lying in bed at night"]),
730
+ "Japanese Anime", "Using Ref Images",get_image_path_list('./examples/taylor'),768,768
731
+ ],
732
+ [0,0.5,0.5,3,"a man, wearing black jacket",
733
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
734
+ array2string(["wake up in the bed",
735
+ "have breakfast",
736
+ "is on the road, go to the company, close look",
737
+ "work in the company",
738
+ "laughing happily",
739
+ "lying in bed at night"
740
+ ]),
741
+ "Japanese Anime","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
742
+ ],
743
+ [0,0.3,0.5,3,"a girl, wearing white shirt, black skirt, black tie, yellow hair",
744
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
745
+ array2string([
746
+ "at home #at home, began to go to drawing",
747
+ "sitting alone on a park bench.",
748
+ "reading a book on a park bench.",
749
+ "[NC]A squirrel approaches, peeking over the bench. ",
750
+ "look around in the park. # She looks around and enjoys the beauty of nature.",
751
+ "[NC]leaf falls from the tree, landing on the sketchbook.",
752
+ "picks up the leaf, examining its details closely.",
753
+ "[NC]The brown squirrel appear.",
754
+ "is very happy # She is very happy to see the squirrel again",
755
+ "[NC]The brown squirrel takes the cracker and scampers up a tree. # She gives the squirrel cracker"]),
756
+ "Japanese Anime","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
757
+ ]
758
+ ],
759
+ inputs=[seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt, prompt_array,style,model_type,files,G_height,G_width],
760
+ # outputs=[post_sketch, binary_matrixes, *color_row, *colors, *prompts, gen_prompt_vis, general_prompt, seed_],
761
+ # run_on_click=True,
762
+ label='😺 Examples 😺',
763
+ )
764
+ gr.Markdown(article)
765
+
766
+
767
+ demo.launch(server_name="0.0.0.0", share = False)
oldversion/gradio_app_sdxl_specific_id_old_version.py ADDED
@@ -0,0 +1,782 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from email.policy import default
2
+ import gradio as gr
3
+ import numpy as np
4
+ import torch
5
+ from huggingface_hub import hf_hub_download
6
+ import requests
7
+ import random
8
+ import os
9
+ import sys
10
+ import pickle
11
+ from PIL import Image
12
+ from tqdm.auto import tqdm
13
+ from datetime import datetime
14
+ from utils.gradio_utils import is_torch2_available
15
+ if is_torch2_available():
16
+ from utils.gradio_utils import \
17
+ AttnProcessor2_0 as AttnProcessor
18
+ else:
19
+ from utils.gradio_utils import AttnProcessor
20
+
21
+ import diffusers
22
+ from diffusers import StableDiffusionXLPipeline
23
+ from utils import PhotoMakerStableDiffusionXLPipeline
24
+ from diffusers import DDIMScheduler
25
+ import torch.nn.functional as F
26
+ from utils.gradio_utils import cal_attn_mask_xl
27
+ import copy
28
+ import os
29
+ from diffusers.utils import load_image
30
+ from utils.utils import get_comic
31
+ from utils.style_template import styles
32
+ image_encoder_path = "./data/models/ip_adapter/sdxl_models/image_encoder"
33
+ ip_ckpt = "./data/models/ip_adapter/sdxl_models/ip-adapter_sdxl_vit-h.bin"
34
+ os.environ["no_proxy"] = "localhost,127.0.0.1,::1"
35
+ STYLE_NAMES = list(styles.keys())
36
+ DEFAULT_STYLE_NAME = "Japanese Anime"
37
+ global models_dict
38
+ models_dict = {
39
+ # "Juggernaut": "RunDiffusion/Juggernaut-XL-v9",
40
+ "RealVision": "SG161222/RealVisXL_V4.0" ,
41
+ "SDXL": "stabilityai/stable-diffusion-xl-base-1.0" ,
42
+ "Unstable": "stablediffusionapi/sdxl-unstable-diffusers-y"
43
+ }
44
+ photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")
45
+ MAX_SEED = np.iinfo(np.int32).max
46
+ def setup_seed(seed):
47
+ torch.manual_seed(seed)
48
+ torch.cuda.manual_seed_all(seed)
49
+ np.random.seed(seed)
50
+ random.seed(seed)
51
+ torch.backends.cudnn.deterministic = True
52
+ def set_text_unfinished():
53
+ return gr.update(visible=True, value="<h3>(Not Finished) Generating ··· The intermediate results will be shown.</h3>")
54
+ def set_text_finished():
55
+ return gr.update(visible=True, value="<h3>Generation Finished</h3>")
56
+ #################################################
57
+ def get_image_path_list(folder_name):
58
+ image_basename_list = os.listdir(folder_name)
59
+ image_path_list = sorted([os.path.join(folder_name, basename) for basename in image_basename_list])
60
+ return image_path_list
61
+
62
+ #################################################
63
+ class SpatialAttnProcessor2_0(torch.nn.Module):
64
+ r"""
65
+ Attention processor for IP-Adapater for PyTorch 2.0.
66
+ Args:
67
+ hidden_size (`int`):
68
+ The hidden size of the attention layer.
69
+ cross_attention_dim (`int`):
70
+ The number of channels in the `encoder_hidden_states`.
71
+ text_context_len (`int`, defaults to 77):
72
+ The context length of the text features.
73
+ scale (`float`, defaults to 1.0):
74
+ the weight scale of image prompt.
75
+ """
76
+
77
+ def __init__(self, hidden_size = None, cross_attention_dim=None,id_length = 4,device = "cuda",dtype = torch.float16):
78
+ super().__init__()
79
+ if not hasattr(F, "scaled_dot_product_attention"):
80
+ raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.")
81
+ self.device = device
82
+ self.dtype = dtype
83
+ self.hidden_size = hidden_size
84
+ self.cross_attention_dim = cross_attention_dim
85
+ self.total_length = id_length + 1
86
+ self.id_length = id_length
87
+ self.id_bank = {}
88
+
89
+ def __call__(
90
+ self,
91
+ attn,
92
+ hidden_states,
93
+ encoder_hidden_states=None,
94
+ attention_mask=None,
95
+ temb=None):
96
+ # un_cond_hidden_states, cond_hidden_states = hidden_states.chunk(2)
97
+ # un_cond_hidden_states = self.__call2__(attn, un_cond_hidden_states,encoder_hidden_states,attention_mask,temb)
98
+ # 生成一个0到1之间的随机数
99
+ global total_count,attn_count,cur_step,mask1024,mask4096
100
+ global sa32, sa64
101
+ global write
102
+ global height,width
103
+ if write:
104
+ # print(f"white:{cur_step}")
105
+ self.id_bank[cur_step] = [hidden_states[:self.id_length].clone(), hidden_states[self.id_length:].clone()]
106
+ else:
107
+ encoder_hidden_states = torch.cat((self.id_bank[cur_step][0].to(self.device),hidden_states[:1],self.id_bank[cur_step][1].to(self.device),hidden_states[1:]))
108
+ # 判断随机数是否大于0.5
109
+ if cur_step <1:
110
+ hidden_states = self.__call2__(attn, hidden_states,None,attention_mask,temb)
111
+ else: # 256 1024 4096
112
+ random_number = random.random()
113
+ if cur_step <20:
114
+ rand_num = 0.3
115
+ else:
116
+ rand_num = 0.1
117
+ # print(f"hidden state shape {hidden_states.shape[1]}")
118
+ if random_number > rand_num:
119
+ # print("mask shape",mask1024.shape,mask4096.shape)
120
+ if not write:
121
+ if hidden_states.shape[1] == (height//32) * (width//32):
122
+ attention_mask = mask1024[mask1024.shape[0] // self.total_length * self.id_length:]
123
+ else:
124
+ attention_mask = mask4096[mask4096.shape[0] // self.total_length * self.id_length:]
125
+ else:
126
+ # print(self.total_length,self.id_length,hidden_states.shape,(height//32) * (width//32))
127
+ if hidden_states.shape[1] == (height//32) * (width//32):
128
+ attention_mask = mask1024[:mask1024.shape[0] // self.total_length * self.id_length,:mask1024.shape[0] // self.total_length * self.id_length]
129
+ else:
130
+ attention_mask = mask4096[:mask4096.shape[0] // self.total_length * self.id_length,:mask4096.shape[0] // self.total_length * self.id_length]
131
+ # print(attention_mask.shape)
132
+ # print("before attention",hidden_states.shape,attention_mask.shape,encoder_hidden_states.shape if encoder_hidden_states is not None else "None")
133
+ hidden_states = self.__call1__(attn, hidden_states,encoder_hidden_states,attention_mask,temb)
134
+ else:
135
+ hidden_states = self.__call2__(attn, hidden_states,None,attention_mask,temb)
136
+ attn_count +=1
137
+ if attn_count == total_count:
138
+ attn_count = 0
139
+ cur_step += 1
140
+ mask1024,mask4096 = cal_attn_mask_xl(self.total_length,self.id_length,sa32,sa64,height,width, device=self.device, dtype= self.dtype)
141
+
142
+ return hidden_states
143
+ def __call1__(
144
+ self,
145
+ attn,
146
+ hidden_states,
147
+ encoder_hidden_states=None,
148
+ attention_mask=None,
149
+ temb=None,
150
+ ):
151
+ # print("hidden state shape",hidden_states.shape,self.id_length)
152
+ residual = hidden_states
153
+ # if encoder_hidden_states is not None:
154
+ # raise Exception("not implement")
155
+ if attn.spatial_norm is not None:
156
+ hidden_states = attn.spatial_norm(hidden_states, temb)
157
+ input_ndim = hidden_states.ndim
158
+
159
+ if input_ndim == 4:
160
+ total_batch_size, channel, height, width = hidden_states.shape
161
+ hidden_states = hidden_states.view(total_batch_size, channel, height * width).transpose(1, 2)
162
+ total_batch_size,nums_token,channel = hidden_states.shape
163
+ img_nums = total_batch_size//2
164
+ hidden_states = hidden_states.view(-1,img_nums,nums_token,channel).reshape(-1,img_nums * nums_token,channel)
165
+
166
+ batch_size, sequence_length, _ = hidden_states.shape
167
+
168
+ if attn.group_norm is not None:
169
+ hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
170
+
171
+ query = attn.to_q(hidden_states)
172
+
173
+ if encoder_hidden_states is None:
174
+ encoder_hidden_states = hidden_states # B, N, C
175
+ else:
176
+ encoder_hidden_states = encoder_hidden_states.view(-1,self.id_length+1,nums_token,channel).reshape(-1,(self.id_length+1) * nums_token,channel)
177
+
178
+ key = attn.to_k(encoder_hidden_states)
179
+ value = attn.to_v(encoder_hidden_states)
180
+
181
+
182
+ inner_dim = key.shape[-1]
183
+ head_dim = inner_dim // attn.heads
184
+
185
+ query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
186
+
187
+ key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
188
+ value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
189
+ # print(key.shape,value.shape,query.shape,attention_mask.shape)
190
+ # the output of sdp = (batch, num_heads, seq_len, head_dim)
191
+ # TODO: add support for attn.scale when we move to Torch 2.1
192
+ #print(query.shape,key.shape,value.shape,attention_mask.shape)
193
+ hidden_states = F.scaled_dot_product_attention(
194
+ query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
195
+ )
196
+
197
+ hidden_states = hidden_states.transpose(1, 2).reshape(total_batch_size, -1, attn.heads * head_dim)
198
+ hidden_states = hidden_states.to(query.dtype)
199
+
200
+
201
+
202
+ # linear proj
203
+ hidden_states = attn.to_out[0](hidden_states)
204
+ # dropout
205
+ hidden_states = attn.to_out[1](hidden_states)
206
+
207
+ # if input_ndim == 4:
208
+ # tile_hidden_states = tile_hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
209
+
210
+ # if attn.residual_connection:
211
+ # tile_hidden_states = tile_hidden_states + residual
212
+
213
+ if input_ndim == 4:
214
+ hidden_states = hidden_states.transpose(-1, -2).reshape(total_batch_size, channel, height, width)
215
+ if attn.residual_connection:
216
+ hidden_states = hidden_states + residual
217
+ hidden_states = hidden_states / attn.rescale_output_factor
218
+ # print(hidden_states.shape)
219
+ return hidden_states
220
+ def __call2__(
221
+ self,
222
+ attn,
223
+ hidden_states,
224
+ encoder_hidden_states=None,
225
+ attention_mask=None,
226
+ temb=None):
227
+ residual = hidden_states
228
+
229
+ if attn.spatial_norm is not None:
230
+ hidden_states = attn.spatial_norm(hidden_states, temb)
231
+
232
+ input_ndim = hidden_states.ndim
233
+
234
+ if input_ndim == 4:
235
+ batch_size, channel, height, width = hidden_states.shape
236
+ hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
237
+
238
+ batch_size, sequence_length, channel = (
239
+ hidden_states.shape
240
+ )
241
+ # print(hidden_states.shape)
242
+ if attention_mask is not None:
243
+ attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
244
+ # scaled_dot_product_attention expects attention_mask shape to be
245
+ # (batch, heads, source_length, target_length)
246
+ attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
247
+
248
+ if attn.group_norm is not None:
249
+ hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
250
+
251
+ query = attn.to_q(hidden_states)
252
+
253
+ if encoder_hidden_states is None:
254
+ encoder_hidden_states = hidden_states # B, N, C
255
+ else:
256
+ encoder_hidden_states = encoder_hidden_states.view(-1,self.id_length+1,sequence_length,channel).reshape(-1,(self.id_length+1) * sequence_length,channel)
257
+
258
+ key = attn.to_k(encoder_hidden_states)
259
+ value = attn.to_v(encoder_hidden_states)
260
+
261
+ inner_dim = key.shape[-1]
262
+ head_dim = inner_dim // attn.heads
263
+
264
+ query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
265
+
266
+ key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
267
+ value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
268
+
269
+ # the output of sdp = (batch, num_heads, seq_len, head_dim)
270
+ # TODO: add support for attn.scale when we move to Torch 2.1
271
+ hidden_states = F.scaled_dot_product_attention(
272
+ query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
273
+ )
274
+
275
+ hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
276
+ hidden_states = hidden_states.to(query.dtype)
277
+
278
+ # linear proj
279
+ hidden_states = attn.to_out[0](hidden_states)
280
+ # dropout
281
+ hidden_states = attn.to_out[1](hidden_states)
282
+
283
+ if input_ndim == 4:
284
+ hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
285
+
286
+ if attn.residual_connection:
287
+ hidden_states = hidden_states + residual
288
+
289
+ hidden_states = hidden_states / attn.rescale_output_factor
290
+
291
+ return hidden_states
292
+
293
+ def set_attention_processor(unet,id_length,is_ipadapter = False):
294
+ global attn_procs
295
+ attn_procs = {}
296
+ for name in unet.attn_processors.keys():
297
+ cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
298
+ if name.startswith("mid_block"):
299
+ hidden_size = unet.config.block_out_channels[-1]
300
+ elif name.startswith("up_blocks"):
301
+ block_id = int(name[len("up_blocks.")])
302
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
303
+ elif name.startswith("down_blocks"):
304
+ block_id = int(name[len("down_blocks.")])
305
+ hidden_size = unet.config.block_out_channels[block_id]
306
+ if cross_attention_dim is None:
307
+ if name.startswith("up_blocks") :
308
+ attn_procs[name] = SpatialAttnProcessor2_0(id_length = id_length)
309
+ else:
310
+ attn_procs[name] = AttnProcessor()
311
+ else:
312
+ if is_ipadapter:
313
+ attn_procs[name] = IPAttnProcessor2_0(
314
+ hidden_size=hidden_size,
315
+ cross_attention_dim=cross_attention_dim,
316
+ scale=1,
317
+ num_tokens=4,
318
+ ).to(unet.device, dtype=torch.float16)
319
+ else:
320
+ attn_procs[name] = AttnProcessor()
321
+
322
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
323
+ #################################################
324
+ #################################################
325
+ canvas_html = "<div id='canvas-root' style='max-width:400px; margin: 0 auto'></div>"
326
+ load_js = """
327
+ async () => {
328
+ const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/sketch-canvas.js"
329
+ fetch(url)
330
+ .then(res => res.text())
331
+ .then(text => {
332
+ const script = document.createElement('script');
333
+ script.type = "module"
334
+ script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
335
+ document.head.appendChild(script);
336
+ });
337
+ }
338
+ """
339
+
340
+ get_js_colors = """
341
+ async (canvasData) => {
342
+ const canvasEl = document.getElementById("canvas-root");
343
+ return [canvasEl._data]
344
+ }
345
+ """
346
+
347
+ css = '''
348
+ #color-bg{display:flex;justify-content: center;align-items: center;}
349
+ .color-bg-item{width: 100%; height: 32px}
350
+ #main_button{width:100%}
351
+ <style>
352
+ '''
353
+
354
+
355
+ #################################################
356
+ title = r"""
357
+ <h1 align="center">StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</h1>
358
+ """
359
+
360
+ description = r"""
361
+ <b>Official 🤗 Gradio demo</b> for <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'><b>StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation</b></a>.<br>
362
+ ❗️❗️❗️[<b>Important</b>] Personalization steps:<br>
363
+ 1️⃣ Enter a Textual Description for Character, if you add the Ref-Image, making sure to <b>follow the class word</b> you want to customize with the <b>trigger word</b>: `img`, such as: `man img` or `woman img` or `girl img`.<br>
364
+ 2️⃣ Enter the prompt array, each line corrsponds to one generated image.<br>
365
+ 3️⃣ Choose your preferred style template.<br>
366
+ 4️⃣ Click the <b>Submit</b> button to start customizing.
367
+ """
368
+
369
+ article = r"""
370
+
371
+ If StoryDiffusion is helpful, please help to ⭐ the <a href='https://github.com/HVision-NKU/StoryDiffusion' target='_blank'>Github Repo</a>. Thanks!
372
+ [![GitHub Stars](https://img.shields.io/github/stars/HVision-NKU/StoryDiffusion?style=social)](https://github.com/HVision-NKU/StoryDiffusion)
373
+ ---
374
+ 📝 **Citation**
375
+ <br>
376
+ If our work is useful for your research, please consider citing:
377
+
378
+ ```bibtex
379
+ @article{Zhou2024storydiffusion,
380
+ title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
381
+ author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
382
+ year={2024}
383
+ }
384
+ ```
385
+ 📋 **License**
386
+ <br>
387
+ The Contents you create are under Apache-2.0 LICENSE. The Code are under Attribution-NonCommercial 4.0 International.
388
+
389
+ 📧 **Contact**
390
+ <br>
391
+ If you have any questions, please feel free to reach me out at <b>[email protected]</b>.
392
+ """
393
+ version = r"""
394
+ <h3 align="center">StoryDiffusion Version 0.01 (test version)</h3>
395
+
396
+ <h5 >1. Support image ref image. (Cartoon Ref image is not support now)</h5>
397
+ <h5 >2. Support Typesetting Style and Captioning.(By default, the prompt is used as the caption for each image. If you need to change the caption, add a # at the end of each line. Only the part after the # will be added as a caption to the image.)</h5>
398
+ <h5 >3. [NC]symbol (The [NC] symbol is used as a flag to indicate that no characters should be present in the generated scene images. If you want do that, prepend the "[NC]" at the beginning of the line. For example, to generate a scene of falling leaves without any character, write: "[NC] The leaves are falling.")</h5>
399
+ <h5 align="center">Tips: </h4>
400
+ """
401
+ #################################################
402
+ global attn_count, total_count, id_length, total_length,cur_step, cur_model_type
403
+ global write
404
+ global sa32, sa64
405
+ global height,width
406
+ attn_count = 0
407
+ total_count = 0
408
+ cur_step = 0
409
+ id_length = 4
410
+ total_length = 5
411
+ cur_model_type = ""
412
+ device="cuda"
413
+ global attn_procs,unet
414
+ attn_procs = {}
415
+ ###
416
+ write = False
417
+ ###
418
+ sa32 = 0.5
419
+ sa64 = 0.5
420
+ height = 768
421
+ width = 768
422
+ ###
423
+ global pipe
424
+ global sd_model_path
425
+ pipe = None
426
+ sd_model_path = models_dict["RealVision"]#"SG161222/RealVisXL_V4.0"
427
+ ### LOAD Stable Diffusion Pipeline
428
+ pipe = StableDiffusionXLPipeline.from_pretrained(sd_model_path, torch_dtype=torch.float16, use_safetensors = True)
429
+ pipe = pipe.to(device)
430
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
431
+ # pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
432
+ pipe.scheduler.set_timesteps(50)
433
+ unet = pipe.unet
434
+ ### Insert PairedAttention
435
+ for name in unet.attn_processors.keys():
436
+ cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
437
+ if name.startswith("mid_block"):
438
+ hidden_size = unet.config.block_out_channels[-1]
439
+ elif name.startswith("up_blocks"):
440
+ block_id = int(name[len("up_blocks.")])
441
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
442
+ elif name.startswith("down_blocks"):
443
+ block_id = int(name[len("down_blocks.")])
444
+ hidden_size = unet.config.block_out_channels[block_id]
445
+ if cross_attention_dim is None and (name.startswith("up_blocks") ) :
446
+ attn_procs[name] = SpatialAttnProcessor2_0(id_length = id_length)
447
+ total_count +=1
448
+ else:
449
+ attn_procs[name] = AttnProcessor()
450
+ print("successsfully load paired self-attention")
451
+ print(f"number of the processor : {total_count}")
452
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
453
+ global mask1024,mask4096
454
+ mask1024, mask4096 = cal_attn_mask_xl(total_length,id_length,sa32,sa64,height,width,device=device,dtype= torch.float16)
455
+
456
+ ######### Gradio Fuction #############
457
+
458
+ def swap_to_gallery(images):
459
+ return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
460
+
461
+ def upload_example_to_gallery(images, prompt, style, negative_prompt):
462
+ return gr.update(value=images, visible=True), gr.update(visible=True), gr.update(visible=False)
463
+
464
+ def remove_back_to_files():
465
+ return gr.update(visible=False), gr.update(visible=False), gr.update(visible=True)
466
+
467
+ def remove_tips():
468
+ return gr.update(visible=False)
469
+
470
+ def apply_style_positive(style_name: str, positive: str):
471
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
472
+ return p.replace("{prompt}", positive)
473
+
474
+ def apply_style(style_name: str, positives: list, negative: str = ""):
475
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
476
+ return [p.replace("{prompt}", positive) for positive in positives], n + ' ' + negative
477
+
478
+ def change_visiale_by_model_type(_model_type):
479
+ if _model_type == "Only Using Textual Description":
480
+ return gr.update(visible=False), gr.update(visible=False), gr.update(visible=False)
481
+ elif _model_type == "Using Ref Images":
482
+ return gr.update(visible=True), gr.update(visible=True), gr.update(visible=False)
483
+ else:
484
+ raise ValueError("Invalid model type",_model_type)
485
+
486
+
487
+ ######### Image Generation ##############
488
+ def process_generation(_sd_type,_model_type,_upload_images, _num_steps,style_name, _Ip_Adapter_Strength ,_style_strength_ratio, guidance_scale, seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt,prompt_array,G_height,G_width,_comic_type):
489
+ _model_type = "Photomaker" if _model_type == "Using Ref Images" else "original"
490
+ if _model_type == "Photomaker" and "img" not in general_prompt:
491
+ raise gr.Error("Please add the triger word \" img \" behind the class word you want to customize, such as: man img or woman img")
492
+ if _upload_images is None and _model_type != "original":
493
+ raise gr.Error(f"Cannot find any input face image!")
494
+ global sa32, sa64,id_length,total_length,attn_procs,unet,cur_model_type
495
+ global write
496
+ global cur_step,attn_count
497
+ global height,width
498
+ height = G_height
499
+ width = G_width
500
+ global pipe
501
+ global sd_model_path,models_dict
502
+ sd_model_path = models_dict[_sd_type]
503
+ use_safe_tensor = True
504
+ if cur_model_type != _sd_type+"-"+_model_type+""+str(id_length_):
505
+ if _sd_type == "Unstable":
506
+ use_safe_tensor = False
507
+ # apply the style template
508
+ ##### load pipe
509
+
510
+ if _model_type == "original":
511
+ pipe = StableDiffusionXLPipeline.from_pretrained(sd_model_path, torch_dtype=torch.float16, use_safetensors=use_safe_tensor)
512
+ pipe = pipe.to(device)
513
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
514
+ elif _model_type == "Photomaker":
515
+ pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
516
+ sd_model_path, torch_dtype=torch.float16, use_safetensors=use_safe_tensor)
517
+ pipe = pipe.to(device)
518
+ pipe.load_photomaker_adapter(
519
+ os.path.dirname(photomaker_path),
520
+ subfolder="",
521
+ weight_name=os.path.basename(photomaker_path),
522
+ trigger_word="img" # define the trigger word
523
+ )
524
+ pipe.fuse_lora()
525
+ set_attention_processor(pipe.unet,id_length_,is_ipadapter = False)
526
+ else:
527
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
528
+ ##### ########################
529
+ pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
530
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
531
+ cur_model_type = _sd_type+"-"+_model_type+""+str(id_length_)
532
+ else:
533
+ unet = pipe.unet
534
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
535
+ if _model_type != "original":
536
+ input_id_images = []
537
+ for img in _upload_images:
538
+ print(img)
539
+ input_id_images.append(load_image(img))
540
+ prompts = prompt_array.splitlines()
541
+ start_merge_step = int(float(_style_strength_ratio) / 100 * _num_steps)
542
+ if start_merge_step > 30:
543
+ start_merge_step = 30
544
+ print(f"start_merge_step:{start_merge_step}")
545
+ generator = torch.Generator(device="cuda").manual_seed(seed_)
546
+ sa32, sa64 = sa32_, sa64_
547
+ id_length = id_length_
548
+ clipped_prompts = prompts[:]
549
+ nc_indexs = []
550
+ for ind,prompt in enumerate(clipped_prompts):
551
+ if "[NC]" in prompt:
552
+ nc_indexs.append(ind)
553
+ if ind < id_length:
554
+ raise gr.Error(f"The first {id_length} row is id prompts, cannot use [NC]!")
555
+ prompts = [general_prompt + "," + prompt if "[NC]" not in prompt else prompt.replace("[NC]","") for prompt in clipped_prompts]
556
+ prompts = [prompt.rpartition('#')[0] if "#" in prompt else prompt for prompt in prompts]
557
+ print(prompts)
558
+ id_prompts = prompts[:id_length]
559
+ real_prompts = prompts[id_length:]
560
+ torch.cuda.empty_cache()
561
+ write = True
562
+ cur_step = 0
563
+
564
+ attn_count = 0
565
+ id_prompts, negative_prompt = apply_style(style_name, id_prompts, negative_prompt)
566
+ setup_seed(seed_)
567
+ total_results = []
568
+ if _model_type == "original":
569
+ id_images = pipe(id_prompts, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
570
+ elif _model_type == "Photomaker":
571
+ id_images = pipe(id_prompts,input_id_images=input_id_images, num_inference_steps=_num_steps, guidance_scale=guidance_scale, start_merge_step = start_merge_step, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images
572
+ else:
573
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
574
+ total_results = id_images + total_results
575
+ yield total_results
576
+ real_images = []
577
+ write = False
578
+ for ind,real_prompt in enumerate(real_prompts):
579
+ setup_seed(seed_)
580
+ cur_step = 0
581
+ real_prompt = apply_style_positive(style_name, real_prompt)
582
+ if _model_type == "original":
583
+ real_images.append(pipe(real_prompt, num_inference_steps=_num_steps, guidance_scale=guidance_scale, height = height, width = width,negative_prompt = negative_prompt,generator = generator).images[0])
584
+ elif _model_type == "Photomaker":
585
+ real_images.append(pipe(real_prompt, input_id_images=input_id_images, num_inference_steps=_num_steps, guidance_scale=guidance_scale, start_merge_step = start_merge_step, height = height, width = width,negative_prompt = negative_prompt,generator = generator,nc_flag = True if ind+id_length in nc_indexs else False ).images[0])
586
+ else:
587
+ raise NotImplementedError("You should choice between original and Photomaker!",f"But you choice {_model_type}")
588
+ total_results = [real_images[-1]] + total_results
589
+ yield total_results
590
+ if _comic_type != "No typesetting (default)":
591
+ captions= prompt_array.splitlines()
592
+ captions = [caption.replace("[NC]","") for caption in captions]
593
+ captions = [caption.split('#')[-1] if "#" in caption else caption for caption in captions]
594
+ from PIL import ImageFont
595
+ total_results = get_comic(id_images + real_images, _comic_type,captions= captions,font=ImageFont.truetype("./fonts/Inkfree.ttf", int(45))) + total_results
596
+ yield total_results
597
+
598
+
599
+
600
+ def array2string(arr):
601
+ stringtmp = ""
602
+ for i,part in enumerate(arr):
603
+ if i != len(arr)-1:
604
+ stringtmp += part +"\n"
605
+ else:
606
+ stringtmp += part
607
+
608
+ return stringtmp
609
+
610
+
611
+ #################################################
612
+ #################################################
613
+ ### define the interface
614
+ with gr.Blocks(css=css) as demo:
615
+ binary_matrixes = gr.State([])
616
+ color_layout = gr.State([])
617
+
618
+ # gr.Markdown(logo)
619
+ gr.Markdown(title)
620
+ gr.Markdown(description)
621
+
622
+ with gr.Row():
623
+ with gr.Group(elem_id="main-image"):
624
+
625
+ prompts = []
626
+ colors = []
627
+
628
+ with gr.Column(visible=True) as gen_prompt_vis:
629
+ sd_type = gr.Dropdown(choices=list(models_dict.keys()), value = "Unstable",label="sd_type", info="Select pretrained model")
630
+ model_type = gr.Radio(["Only Using Textual Description", "Using Ref Images"], label="model_type", value = "Only Using Textual Description", info="Control type of the Character")
631
+ with gr.Group(visible=False) as control_image_input:
632
+ files = gr.Files(
633
+ label="Drag (Select) 1 or more photos of your face",
634
+ file_types=["image"],
635
+ )
636
+ uploaded_files = gr.Gallery(label="Your images", visible=False, columns=5, rows=1, height=200)
637
+ with gr.Column(visible=False) as clear_button:
638
+ remove_and_reupload = gr.ClearButton(value="Remove and upload new ones", components=files, size="sm")
639
+ general_prompt = gr.Textbox(value='', label="(1) Textual Description for Character", interactive=True)
640
+ negative_prompt = gr.Textbox(value='', label="(2) Negative_prompt", interactive=True)
641
+ style = gr.Dropdown(label="Style template", choices=STYLE_NAMES, value=DEFAULT_STYLE_NAME)
642
+ prompt_array = gr.Textbox(lines = 3,value='', label="(3) Comic Description (each line corresponds to a frame).", interactive=True)
643
+ with gr.Accordion("(4) Tune the hyperparameters", open=True):
644
+ sa32_ = gr.Slider(label=" (The degree of Paired Attention at 32 x 32 self-attention layers) ", minimum=0, maximum=1., value=0.5, step=0.1)
645
+ sa64_ = gr.Slider(label=" (The degree of Paired Attention at 64 x 64 self-attention layers) ", minimum=0, maximum=1., value=0.5, step=0.1)
646
+ id_length_ = gr.Slider(label= "Number of id images in total images" , minimum=2, maximum=4, value=2, step=1)
647
+ seed_ = gr.Slider(label="Seed", minimum=-1, maximum=MAX_SEED, value=0, step=1)
648
+ num_steps = gr.Slider(
649
+ label="Number of sample steps",
650
+ minimum=20,
651
+ maximum=100,
652
+ step=1,
653
+ value=50,
654
+ )
655
+ G_height = gr.Slider(
656
+ label="height",
657
+ minimum=256,
658
+ maximum=1024,
659
+ step=32,
660
+ value=768,
661
+ )
662
+ G_width = gr.Slider(
663
+ label="width",
664
+ minimum=256,
665
+ maximum=1024,
666
+ step=32,
667
+ value=768,
668
+ )
669
+ comic_type = gr.Radio(["No typesetting (default)", "Four Pannel", "Classic Comic Style"], value = "Classic Comic Style", label="Typesetting Style", info="Select the typesetting style ")
670
+ guidance_scale = gr.Slider(
671
+ label="Guidance scale",
672
+ minimum=0.1,
673
+ maximum=10.0,
674
+ step=0.1,
675
+ value=5,
676
+ )
677
+ style_strength_ratio = gr.Slider(
678
+ label="Style strength of Ref Image (%)",
679
+ minimum=15,
680
+ maximum=50,
681
+ step=1,
682
+ value=20,
683
+ visible=False
684
+ )
685
+ Ip_Adapter_Strength = gr.Slider(
686
+ label="Ip_Adapter_Strength",
687
+ minimum=0,
688
+ maximum=1,
689
+ step=0.1,
690
+ value=0.5,
691
+ visible=False
692
+ )
693
+ final_run_btn = gr.Button("Generate ! 😺")
694
+
695
+
696
+ with gr.Column():
697
+ out_image = gr.Gallery(label="Result", columns=2, height='auto')
698
+ generated_information = gr.Markdown(label="Generation Details", value="",visible=False)
699
+ gr.Markdown(version)
700
+ model_type.change(fn = change_visiale_by_model_type , inputs = model_type, outputs=[control_image_input,style_strength_ratio,Ip_Adapter_Strength])
701
+ files.upload(fn=swap_to_gallery, inputs=files, outputs=[uploaded_files, clear_button, files])
702
+ remove_and_reupload.click(fn=remove_back_to_files, outputs=[uploaded_files, clear_button, files])
703
+
704
+ final_run_btn.click(fn=set_text_unfinished, outputs = generated_information
705
+ ).then(process_generation, inputs=[sd_type,model_type,files, num_steps,style, Ip_Adapter_Strength,style_strength_ratio, guidance_scale, seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt, prompt_array,G_height,G_width,comic_type], outputs=out_image
706
+ ).then(fn=set_text_finished,outputs = generated_information)
707
+
708
+
709
+ gr.Examples(
710
+ examples=[
711
+ [0,0.5,0.5,2,"a man, wearing black suit",
712
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
713
+ array2string(["at home, read new paper #at home, The newspaper says there is a treasure house in the forest.",
714
+ "on the road, near the forest",
715
+ "[NC] The car on the road, near the forest #He drives to the forest in search of treasure.",
716
+ "[NC]A tiger appeared in the forest, at night ",
717
+ "very frightened, open mouth, in the forest, at night",
718
+ "running very fast, in the forest, at night",
719
+ "[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!",
720
+ "in the house filled with treasure, laughing, at night #He is overjoyed inside the house."
721
+ ]),
722
+ "Comic book","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
723
+ ],
724
+ [0,0.5,0.5,2,"a man, wearing black suit",
725
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
726
+ array2string(["at home, read new paper #at home, The newspaper says there is a treasure house in the forest.",
727
+ "on the road, near the forest",
728
+ "[NC] The car on the road, near the forest #He drives to the forest in search of treasure.",
729
+ "[NC]A tiger appeared in the forest, at night ",
730
+ "very frightened, open mouth, in the forest, at night",
731
+ "running very fast, in the forest, at night",
732
+ "[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!",
733
+ "in the house filled with treasure, laughing, at night #He is overjoyed inside the house."
734
+ ]),
735
+ "Comic book","Only Using Textual Description",get_image_path_list('./examples/Robert'),1024,1024
736
+ ],
737
+ [1,0.5,0.5,3,"a woman img, wearing a white T-shirt, blue loose hair",
738
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
739
+ array2string(["wake up in the bed",
740
+ "have breakfast",
741
+ "is on the road, go to company",
742
+ "work in the company",
743
+ "Take a walk next to the company at noon",
744
+ "lying in bed at night"]),
745
+ "Japanese Anime", "Using Ref Images",get_image_path_list('./examples/taylor'),768,768
746
+ ],
747
+ [0,0.5,0.5,3,"a man, wearing black jacket",
748
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
749
+ array2string(["wake up in the bed",
750
+ "have breakfast",
751
+ "is on the road, go to the company, close look",
752
+ "work in the company",
753
+ "laughing happily",
754
+ "lying in bed at night"
755
+ ]),
756
+ "Japanese Anime","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
757
+ ],
758
+ [0,0.3,0.5,3,"a girl, wearing white shirt, black skirt, black tie, yellow hair",
759
+ "bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
760
+ array2string([
761
+ "at home #at home, began to go to drawing",
762
+ "sitting alone on a park bench.",
763
+ "reading a book on a park bench.",
764
+ "[NC]A squirrel approaches, peeking over the bench. ",
765
+ "look around in the park. # She looks around and enjoys the beauty of nature.",
766
+ "[NC]leaf falls from the tree, landing on the sketchbook.",
767
+ "picks up the leaf, examining its details closely.",
768
+ "[NC]The brown squirrel appear.",
769
+ "is very happy # She is very happy to see the squirrel again",
770
+ "[NC]The brown squirrel takes the cracker and scampers up a tree. # She gives the squirrel cracker"]),
771
+ "Japanese Anime","Only Using Textual Description",get_image_path_list('./examples/taylor'),768,768
772
+ ]
773
+ ],
774
+ inputs=[seed_, sa32_, sa64_, id_length_, general_prompt, negative_prompt, prompt_array,style,model_type,files,G_height,G_width],
775
+ # outputs=[post_sketch, binary_matrixes, *color_row, *colors, *prompts, gen_prompt_vis, general_prompt, seed_],
776
+ # run_on_click=True,
777
+ label='😺 Examples 😺',
778
+ )
779
+ gr.Markdown(article)
780
+
781
+
782
+ demo.launch(server_name="0.0.0.0", share = False)
predict.py ADDED
@@ -0,0 +1,781 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Prediction interface for Cog ⚙️
2
+ # https://cog.run/python
3
+
4
+ import os
5
+ import copy
6
+ import random
7
+ import subprocess
8
+ import numpy as np
9
+ import time
10
+ import torch
11
+ import torch.nn.functional as F
12
+ from PIL import ImageFont
13
+ from cog import BasePredictor, Input, Path, BaseModel
14
+ from diffusers import StableDiffusionXLPipeline, DDIMScheduler
15
+ from diffusers.utils import load_image
16
+
17
+ from utils import PhotoMakerStableDiffusionXLPipeline
18
+ from utils.style_template import styles
19
+ from utils.gradio_utils import (
20
+ AttnProcessor2_0 as AttnProcessor,
21
+ ) # with torch2 installed
22
+ from utils.gradio_utils import cal_attn_mask_xl
23
+ from utils.utils import get_comic
24
+
25
+ MODEL_URL = "https://weights.replicate.delivery/default/HVision_NKU/StoryDiffusion.tar"
26
+ MODEL_CACHE = "model_weights"
27
+ STYLE_NAMES = list(styles.keys())
28
+ DEFAULT_STYLE_NAME = "Japanese Anime"
29
+
30
+ global total_count, attn_count, cur_step, mask1024, mask4096, attn_procs, unet
31
+ global sa32, sa64
32
+ global write
33
+ global height, width
34
+
35
+
36
+ """
37
+ # load and upload the weights to replicate.delivery for faster booting on Replicate
38
+ models_dict = {
39
+ "RealVision": "SG161222/RealVisXL_V4.0",
40
+ "Unstable": "stablediffusionapi/sdxl-unstable-diffusers-y",
41
+ }
42
+ # photomaker_path = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")
43
+ photomaker_path = f"{MODEL_CACHE}/PhotoMaker/photomaker-v1.bin"
44
+
45
+ pipe_unstable = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
46
+ models_dict["Unstable"],
47
+ torch_dtype=torch.float16,
48
+ use_safetensors=False,
49
+ )
50
+ pipe_unstable.save_pretrained(f"{MODEL_CACHE}/Unstable/stablediffusionapi/sdxl-unstable-diffusers-y")
51
+
52
+ pipe_realvision = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
53
+ models_dict["RealVision"], torch_dtype=torch.float16, use_safetensors=True
54
+ )
55
+ pipe_realvision.save_pretrained(f"{MODEL_CACHE}/RealVision/SG161222/RealVisXL_V4.0")
56
+ """
57
+
58
+
59
+ class ModelOutput(BaseModel):
60
+ comic: Path
61
+ individual_images: list[Path]
62
+
63
+
64
+ def download_weights(url, dest):
65
+ start = time.time()
66
+ print("downloading url: ", url)
67
+ print("downloading to: ", dest)
68
+ subprocess.check_call(["pget", "-x", url, dest], close_fds=False)
69
+ print("downloading took: ", time.time() - start)
70
+
71
+
72
+ def setup_seed(seed):
73
+ torch.manual_seed(seed)
74
+ torch.cuda.manual_seed_all(seed)
75
+ np.random.seed(seed)
76
+ random.seed(seed)
77
+ torch.backends.cudnn.deterministic = True
78
+
79
+
80
+ def apply_style_positive(style_name: str, positive: str):
81
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
82
+ return p.replace("{prompt}", positive)
83
+
84
+
85
+ def apply_style(style_name: str, positives: list, negative: str = ""):
86
+ p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME])
87
+ return [
88
+ p.replace("{prompt}", positive) for positive in positives
89
+ ], n + " " + negative
90
+
91
+
92
+ def set_attention_processor(unet, id_length, is_ipadapter=False):
93
+ global total_count
94
+ total_count = 0
95
+ attn_procs = {}
96
+ for name in unet.attn_processors.keys():
97
+ cross_attention_dim = (
98
+ None
99
+ if name.endswith("attn1.processor")
100
+ else unet.config.cross_attention_dim
101
+ )
102
+ if name.startswith("mid_block"):
103
+ hidden_size = unet.config.block_out_channels[-1]
104
+ elif name.startswith("up_blocks"):
105
+ block_id = int(name[len("up_blocks.")])
106
+ hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
107
+ elif name.startswith("down_blocks"):
108
+ block_id = int(name[len("down_blocks.")])
109
+ hidden_size = unet.config.block_out_channels[block_id]
110
+ if cross_attention_dim is None:
111
+ if name.startswith("up_blocks"):
112
+ attn_procs[name] = SpatialAttnProcessor2_0(id_length=id_length)
113
+ total_count += 1
114
+ else:
115
+ attn_procs[name] = AttnProcessor()
116
+ else:
117
+ if is_ipadapter:
118
+ attn_procs[name] = IPAttnProcessor2_0(
119
+ hidden_size=hidden_size,
120
+ cross_attention_dim=cross_attention_dim,
121
+ scale=1,
122
+ num_tokens=4,
123
+ ).to(unet.device, dtype=torch.float16)
124
+ else:
125
+ attn_procs[name] = AttnProcessor()
126
+
127
+ unet.set_attn_processor(copy.deepcopy(attn_procs))
128
+ print("Successfully load paired self-attention")
129
+ print(f"Number of the processor : {total_count}")
130
+
131
+
132
+ #################################################
133
+ ########Consistent Self-Attention################
134
+ #################################################
135
+ class SpatialAttnProcessor2_0(torch.nn.Module):
136
+ r"""
137
+ Attention processor for IP-Adapater for PyTorch 2.0.
138
+ Args:
139
+ hidden_size (`int`):
140
+ The hidden size of the attention layer.
141
+ cross_attention_dim (`int`):
142
+ The number of channels in the `encoder_hidden_states`.
143
+ text_context_len (`int`, defaults to 77):
144
+ The context length of the text features.
145
+ scale (`float`, defaults to 1.0):
146
+ the weight scale of image prompt.
147
+ """
148
+
149
+ def __init__(
150
+ self,
151
+ hidden_size=None,
152
+ cross_attention_dim=None,
153
+ id_length=4,
154
+ device="cuda",
155
+ dtype=torch.float16,
156
+ ):
157
+ super().__init__()
158
+ if not hasattr(F, "scaled_dot_product_attention"):
159
+ raise ImportError(
160
+ "AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0."
161
+ )
162
+ self.device = device
163
+ self.dtype = dtype
164
+ self.hidden_size = hidden_size
165
+ self.cross_attention_dim = cross_attention_dim
166
+ self.total_length = id_length + 1
167
+ self.id_length = id_length
168
+ self.id_bank = {}
169
+
170
+ def __call__(
171
+ self,
172
+ attn,
173
+ hidden_states,
174
+ encoder_hidden_states=None,
175
+ attention_mask=None,
176
+ temb=None,
177
+ ):
178
+ global total_count, attn_count, cur_step, mask1024, mask4096
179
+ global sa32, sa64
180
+ global write
181
+ global height, width
182
+ if write:
183
+ self.id_bank[cur_step] = [
184
+ hidden_states[: self.id_length],
185
+ hidden_states[self.id_length :],
186
+ ]
187
+ else:
188
+ encoder_hidden_states = torch.cat(
189
+ (
190
+ self.id_bank[cur_step][0].to(self.device),
191
+ hidden_states[:1],
192
+ self.id_bank[cur_step][1].to(self.device),
193
+ hidden_states[1:],
194
+ )
195
+ )
196
+ # skip in early step
197
+ if cur_step < 5:
198
+ hidden_states = self.__call2__(
199
+ attn, hidden_states, encoder_hidden_states, attention_mask, temb
200
+ )
201
+ else: # 256 1024 4096
202
+ random_number = random.random()
203
+ if cur_step < 20:
204
+ rand_num = 0.3
205
+ else:
206
+ rand_num = 0.1
207
+ if random_number > rand_num:
208
+ if not write:
209
+ if hidden_states.shape[1] == (height // 32) * (width // 32):
210
+ attention_mask = mask1024[
211
+ mask1024.shape[0] // self.total_length * self.id_length :
212
+ ]
213
+ else:
214
+ attention_mask = mask4096[
215
+ mask4096.shape[0] // self.total_length * self.id_length :
216
+ ]
217
+ else:
218
+ if hidden_states.shape[1] == (height // 32) * (width // 32):
219
+ attention_mask = mask1024[
220
+ : mask1024.shape[0] // self.total_length * self.id_length,
221
+ : mask1024.shape[0] // self.total_length * self.id_length,
222
+ ]
223
+ else:
224
+ attention_mask = mask4096[
225
+ : mask4096.shape[0] // self.total_length * self.id_length,
226
+ : mask4096.shape[0] // self.total_length * self.id_length,
227
+ ]
228
+ hidden_states = self.__call1__(
229
+ attn, hidden_states, encoder_hidden_states, attention_mask, temb
230
+ )
231
+ else:
232
+ hidden_states = self.__call2__(
233
+ attn, hidden_states, None, attention_mask, temb
234
+ )
235
+ attn_count += 1
236
+ if attn_count == total_count:
237
+ attn_count = 0
238
+ cur_step += 1
239
+ mask1024, mask4096 = cal_attn_mask_xl(
240
+ self.total_length,
241
+ self.id_length,
242
+ sa32,
243
+ sa64,
244
+ height,
245
+ width,
246
+ device=self.device,
247
+ dtype=self.dtype,
248
+ )
249
+
250
+ return hidden_states
251
+
252
+ def __call1__(
253
+ self,
254
+ attn,
255
+ hidden_states,
256
+ encoder_hidden_states=None,
257
+ attention_mask=None,
258
+ temb=None,
259
+ ):
260
+ residual = hidden_states
261
+ if attn.spatial_norm is not None:
262
+ hidden_states = attn.spatial_norm(hidden_states, temb)
263
+ input_ndim = hidden_states.ndim
264
+
265
+ if input_ndim == 4:
266
+ total_batch_size, channel, height, width = hidden_states.shape
267
+ hidden_states = hidden_states.view(
268
+ total_batch_size, channel, height * width
269
+ ).transpose(1, 2)
270
+ total_batch_size, nums_token, channel = hidden_states.shape
271
+ img_nums = total_batch_size // 2
272
+ hidden_states = hidden_states.view(-1, img_nums, nums_token, channel).reshape(
273
+ -1, img_nums * nums_token, channel
274
+ )
275
+
276
+ batch_size, sequence_length, _ = hidden_states.shape
277
+
278
+ if attn.group_norm is not None:
279
+ hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(
280
+ 1, 2
281
+ )
282
+
283
+ query = attn.to_q(hidden_states)
284
+
285
+ if encoder_hidden_states is None:
286
+ encoder_hidden_states = hidden_states # B, N, C
287
+ else:
288
+ encoder_hidden_states = encoder_hidden_states.view(
289
+ -1, self.id_length + 1, nums_token, channel
290
+ ).reshape(-1, (self.id_length + 1) * nums_token, channel)
291
+
292
+ key = attn.to_k(encoder_hidden_states)
293
+ value = attn.to_v(encoder_hidden_states)
294
+
295
+ inner_dim = key.shape[-1]
296
+ head_dim = inner_dim // attn.heads
297
+
298
+ query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
299
+
300
+ key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
301
+ value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
302
+ hidden_states = F.scaled_dot_product_attention(
303
+ query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
304
+ )
305
+
306
+ hidden_states = hidden_states.transpose(1, 2).reshape(
307
+ total_batch_size, -1, attn.heads * head_dim
308
+ )
309
+ hidden_states = hidden_states.to(query.dtype)
310
+
311
+ # linear proj
312
+ hidden_states = attn.to_out[0](hidden_states)
313
+ # dropout
314
+ hidden_states = attn.to_out[1](hidden_states)
315
+
316
+ if input_ndim == 4:
317
+ hidden_states = hidden_states.transpose(-1, -2).reshape(
318
+ total_batch_size, channel, height, width
319
+ )
320
+ if attn.residual_connection:
321
+ hidden_states = hidden_states + residual
322
+ hidden_states = hidden_states / attn.rescale_output_factor
323
+ # print(hidden_states.shape)
324
+ return hidden_states
325
+
326
+ def __call2__(
327
+ self,
328
+ attn,
329
+ hidden_states,
330
+ encoder_hidden_states=None,
331
+ attention_mask=None,
332
+ temb=None,
333
+ ):
334
+ residual = hidden_states
335
+
336
+ if attn.spatial_norm is not None:
337
+ hidden_states = attn.spatial_norm(hidden_states, temb)
338
+
339
+ input_ndim = hidden_states.ndim
340
+
341
+ if input_ndim == 4:
342
+ batch_size, channel, height, width = hidden_states.shape
343
+ hidden_states = hidden_states.view(
344
+ batch_size, channel, height * width
345
+ ).transpose(1, 2)
346
+
347
+ batch_size, sequence_length, channel = hidden_states.shape
348
+ # print(hidden_states.shape)
349
+ if attention_mask is not None:
350
+ attention_mask = attn.prepare_attention_mask(
351
+ attention_mask, sequence_length, batch_size
352
+ )
353
+ # scaled_dot_product_attention expects attention_mask shape to be
354
+ # (batch, heads, source_length, target_length)
355
+ attention_mask = attention_mask.view(
356
+ batch_size, attn.heads, -1, attention_mask.shape[-1]
357
+ )
358
+
359
+ if attn.group_norm is not None:
360
+ hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(
361
+ 1, 2
362
+ )
363
+
364
+ query = attn.to_q(hidden_states)
365
+
366
+ if encoder_hidden_states is None:
367
+ encoder_hidden_states = hidden_states # B, N, C
368
+ else:
369
+ encoder_hidden_states = encoder_hidden_states.view(
370
+ -1, self.id_length + 1, sequence_length, channel
371
+ ).reshape(-1, (self.id_length + 1) * sequence_length, channel)
372
+
373
+ key = attn.to_k(encoder_hidden_states)
374
+ value = attn.to_v(encoder_hidden_states)
375
+
376
+ inner_dim = key.shape[-1]
377
+ head_dim = inner_dim // attn.heads
378
+
379
+ query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
380
+
381
+ key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
382
+ value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
383
+
384
+ hidden_states = F.scaled_dot_product_attention(
385
+ query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
386
+ )
387
+
388
+ hidden_states = hidden_states.transpose(1, 2).reshape(
389
+ batch_size, -1, attn.heads * head_dim
390
+ )
391
+ hidden_states = hidden_states.to(query.dtype)
392
+
393
+ # linear proj
394
+ hidden_states = attn.to_out[0](hidden_states)
395
+ # dropout
396
+ hidden_states = attn.to_out[1](hidden_states)
397
+
398
+ if input_ndim == 4:
399
+ hidden_states = hidden_states.transpose(-1, -2).reshape(
400
+ batch_size, channel, height, width
401
+ )
402
+
403
+ if attn.residual_connection:
404
+ hidden_states = hidden_states + residual
405
+
406
+ hidden_states = hidden_states / attn.rescale_output_factor
407
+
408
+ return hidden_states
409
+
410
+
411
+ class Predictor(BasePredictor):
412
+ def setup(self) -> None:
413
+ """Load the model into memory to make running multiple predictions efficient"""
414
+
415
+ models_dict = {
416
+ "RealVision": "SG161222/RealVisXL_V4.0",
417
+ "Unstable": "stablediffusionapi/sdxl-unstable-diffusers-y",
418
+ }
419
+
420
+ if not os.path.exists(MODEL_CACHE):
421
+ download_weights(MODEL_URL, MODEL_CACHE)
422
+
423
+ photomaker_path = f"{MODEL_CACHE}/PhotoMaker/photomaker-v1.bin"
424
+
425
+ self.sdxl_pipe_unstable = StableDiffusionXLPipeline.from_pretrained(
426
+ f"{MODEL_CACHE}/Unstable/sdxl/stablediffusionapi/sdxl-unstable-diffusers-y",
427
+ torch_dtype=torch.float16,
428
+ )
429
+ self.sdxl_pipe_realvision = StableDiffusionXLPipeline.from_pretrained(
430
+ f"{MODEL_CACHE}/RealVision/sdxl/SG161222/RealVisXL_V4.0",
431
+ torch_dtype=torch.float16,
432
+ )
433
+
434
+ self.pipe_unstable = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
435
+ f"{MODEL_CACHE}/Unstable/stablediffusionapi/sdxl-unstable-diffusers-y",
436
+ torch_dtype=torch.float16,
437
+ use_safetensors=False,
438
+ )
439
+ self.pipe_unstable.load_photomaker_adapter(
440
+ os.path.dirname(photomaker_path),
441
+ subfolder="",
442
+ weight_name=os.path.basename(photomaker_path),
443
+ trigger_word="img", # define the trigger word
444
+ )
445
+
446
+ self.pipe_realvision = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
447
+ f"{MODEL_CACHE}/RealVision/SG161222/RealVisXL_V4.0",
448
+ torch_dtype=torch.float16,
449
+ use_safetensors=True,
450
+ )
451
+ self.pipe_realvision.load_photomaker_adapter(
452
+ os.path.dirname(photomaker_path),
453
+ subfolder="",
454
+ weight_name=os.path.basename(photomaker_path),
455
+ trigger_word="img", # define the trigger word
456
+ )
457
+ self.pipe_realvision.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
458
+ self.pipe_realvision.fuse_lora()
459
+
460
+ @torch.inference_mode()
461
+ def predict(
462
+ self,
463
+ sd_model: str = Input(
464
+ description="Choose a model",
465
+ choices=["Unstable", "RealVision"],
466
+ default="Unstable",
467
+ ),
468
+ ref_image: Path = Input(
469
+ description="Reference image for the character",
470
+ default=None,
471
+ ),
472
+ character_description: str = Input(
473
+ description="General description of the character. If ref_image above is provided, making sure to follow the class word you want to customize with the trigger word 'img', such as: 'man img' or 'woman img' or 'girl img'",
474
+ default="a man, wearing black suit",
475
+ ),
476
+ negative_prompt: str = Input(
477
+ description="Describe things you do not want to see in the output",
478
+ default="bad anatomy, bad hands, missing fingers, extra fingers, three hands, three legs, bad arms, missing legs, missing arms, poorly drawn face, bad face, fused face, cloned face, three crus, fused feet, fused thigh, extra crus, ugly fingers, horn, cartoon, cg, 3d, unreal, animate, amputation, disconnected limbs",
479
+ ),
480
+ comic_description: str = Input(
481
+ description="Comic Description. Each frame is divided by a new line. Only the first 10 prompts are valid for demo speed! For comic_description NOT using ref_image: (1) Support Typesetting Style and Captioning. By default, the prompt is used as the caption for each image. If you need to change the caption, add a '#' at the end of each line. Only the part after the '#' will be added as a caption to the image. (2) The [NC] symbol is used as a flag to indicate that no characters should be present in the generated scene images. If you want do that, prepend the '[NC]' at the beginning of the line.",
482
+ default="at home, read new paper #at home, The newspaper says there is a treasure house in the forest.\non the road, near the forest\n[NC] The car on the road, near the forest #He drives to the forest in search of treasure.\n[NC]A tiger appeared in the forest, at night \nvery frightened, open mouth, in the forest, at night\nrunning very fast, in the forest, at night\n[NC] A house in the forest, at night #Suddenly, he discovers the treasure house!\nin the house filled with treasure, laughing, at night #He is overjoyed inside the house.",
483
+ ),
484
+ style_name: str = Input(
485
+ description="Style template",
486
+ choices=STYLE_NAMES,
487
+ default=DEFAULT_STYLE_NAME,
488
+ ),
489
+ comic_style: str = Input(
490
+ description="Select the comic style for the combined comic",
491
+ choices=["Four Pannel", "Classic Comic Style"],
492
+ default="Classic Comic Style",
493
+ ),
494
+ style_strength_ratio: int = Input(
495
+ description="Style strength of Ref Image (%), only used if ref_image is provided",
496
+ default=20,
497
+ ge=15,
498
+ le=50,
499
+ ),
500
+ image_width: int = Input(
501
+ description="Width of output image",
502
+ choices=[
503
+ 256,
504
+ 288,
505
+ 320,
506
+ 352,
507
+ 384,
508
+ 416,
509
+ 448,
510
+ 480,
511
+ 512,
512
+ 544,
513
+ 576,
514
+ 608,
515
+ 640,
516
+ 672,
517
+ 704,
518
+ 736,
519
+ 768,
520
+ 800,
521
+ 832,
522
+ 864,
523
+ 896,
524
+ 928,
525
+ 960,
526
+ 992,
527
+ 1024,
528
+ ],
529
+ default=768,
530
+ ),
531
+ image_height: int = Input(
532
+ description="Height of output image",
533
+ choices=[
534
+ 256,
535
+ 288,
536
+ 320,
537
+ 352,
538
+ 384,
539
+ 416,
540
+ 448,
541
+ 480,
542
+ 512,
543
+ 544,
544
+ 576,
545
+ 608,
546
+ 640,
547
+ 672,
548
+ 704,
549
+ 736,
550
+ 768,
551
+ 800,
552
+ 832,
553
+ 864,
554
+ 896,
555
+ 928,
556
+ 960,
557
+ 992,
558
+ 1024,
559
+ ],
560
+ default=768,
561
+ ),
562
+ num_steps: int = Input(
563
+ description="Number of sample steps", ge=20, le=50, default=25
564
+ ),
565
+ guidance_scale: float = Input(
566
+ description="Scale for classifier-free guidance", ge=0.1, le=10, default=5
567
+ ),
568
+ seed: int = Input(
569
+ description="Random seed. Leave blank to randomize the seed", default=None
570
+ ),
571
+ sa32_setting: float = Input(
572
+ description="The degree of Paired Attention at 32 x 32 self-attention layers",
573
+ default=0.5,
574
+ ge=0,
575
+ le=1.0,
576
+ ),
577
+ sa64_setting: float = Input(
578
+ description="The degree of Paired Attention at 64 x 64 self-attention layers",
579
+ default=0.5,
580
+ ge=0,
581
+ le=1.0,
582
+ ),
583
+ num_ids: int = Input(
584
+ description="Number of id images in total images. This should not exceed total number of line-separated prompts",
585
+ default=3,
586
+ ),
587
+ output_format: str = Input(
588
+ description="Format of the output images",
589
+ choices=["webp", "jpg", "png"],
590
+ default="webp",
591
+ ),
592
+ output_quality: int = Input(
593
+ description="Quality of the output images, from 0 to 100. 100 is best quality, 0 is lowest quality",
594
+ default=80,
595
+ ge=0,
596
+ le=100,
597
+ ),
598
+ ) -> ModelOutput:
599
+ """Run a single prediction on the model"""
600
+
601
+ global total_count, attn_count, cur_step, mask1024, mask4096, attn_procs, unet
602
+ global sa32, sa64
603
+ global write
604
+ global height, width
605
+
606
+ assert (
607
+ len(character_description.strip()) > 0
608
+ ), "Please provide the description of the character."
609
+
610
+ if ref_image is not None:
611
+ assert (
612
+ "img" in character_description
613
+ ), f"When using ref_image, please add the trigger word 'img' behind the class word you want to customize, such as: man img or woman img"
614
+ assert (
615
+ "[NC]" not in comic_description
616
+ ), "You should not use trigger word [NC] when ref_image is provided."
617
+
618
+ height = image_height
619
+ width = image_width
620
+ id_length = num_ids
621
+ sa32 = sa32_setting
622
+ sa64 = sa64_setting
623
+
624
+ clipped_prompts = comic_description.splitlines()[:10]
625
+ print(clipped_prompts)
626
+ prompts = [
627
+ (
628
+ character_description + "," + prompt
629
+ if "[NC]" not in prompt
630
+ else prompt.replace("[NC]", "")
631
+ )
632
+ for prompt in clipped_prompts
633
+ ]
634
+ print(prompts)
635
+ prompts = [
636
+ prompt.rpartition("#")[0].strip() if "#" in prompt else prompt.strip()
637
+ for prompt in prompts
638
+ ]
639
+ print(prompts)
640
+ assert id_length <= len(
641
+ prompts
642
+ ), "id_length should not exceed total number of line-separated prompts"
643
+
644
+ id_prompts = prompts[:id_length]
645
+ real_prompts = prompts[id_length:]
646
+
647
+ if seed is None:
648
+ seed = int.from_bytes(os.urandom(2), "big")
649
+ print(f"Using seed: {seed}")
650
+
651
+ device = "cuda:0"
652
+ setup_seed(seed)
653
+ generator = torch.Generator(device=device).manual_seed(seed)
654
+
655
+ torch.cuda.empty_cache()
656
+
657
+ model_type = "original" if ref_image is None else "Photomaker"
658
+
659
+ if model_type == "original":
660
+ pipe = (
661
+ self.sdxl_pipe_realvision
662
+ if style_name == "(No style)"
663
+ else self.sdxl_pipe_unstable
664
+ )
665
+ pipe = pipe.to(device)
666
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
667
+ else:
668
+ if sd_model != "RealVision" and style_name != "(No style)":
669
+ pipe = self.pipe_unstable.to(device)
670
+ else:
671
+ pipe = self.pipe_realvision.to(device)
672
+ pipe.id_encoder.to(device)
673
+
674
+ write = True
675
+ cur_step = 0
676
+ attn_count = 0
677
+
678
+ set_attention_processor(pipe.unet, id_length, is_ipadapter=False)
679
+ pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
680
+ pipe.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2)
681
+ curmodel_type = sd_model + "-" + model_type + "" + str(id_length)
682
+
683
+ id_prompts, negative_prompt = apply_style(
684
+ style_name, id_prompts, negative_prompt
685
+ )
686
+
687
+ total_results = []
688
+ if model_type == "original":
689
+ id_images = pipe(
690
+ id_prompts,
691
+ num_inference_steps=num_steps,
692
+ guidance_scale=guidance_scale,
693
+ height=height,
694
+ width=width,
695
+ negative_prompt=negative_prompt,
696
+ generator=generator,
697
+ ).images
698
+ else:
699
+ input_id_images = [load_image(str(ref_image))]
700
+ start_merge_step = int(float(style_strength_ratio) / 100 * num_steps)
701
+ id_images = pipe(
702
+ id_prompts,
703
+ input_id_images=input_id_images,
704
+ num_inference_steps=num_steps,
705
+ guidance_scale=guidance_scale,
706
+ start_merge_step=start_merge_step,
707
+ height=height,
708
+ width=width,
709
+ negative_prompt=negative_prompt,
710
+ generator=generator,
711
+ ).images
712
+
713
+ total_results = id_images + total_results
714
+
715
+ real_images = []
716
+ write = False
717
+ for real_prompt in real_prompts:
718
+ cur_step = 0
719
+ real_prompt = apply_style_positive(style_name, real_prompt)
720
+ if model_type == "original":
721
+ real_images.append(
722
+ pipe(
723
+ real_prompt,
724
+ num_inference_steps=num_steps,
725
+ guidance_scale=guidance_scale,
726
+ height=height,
727
+ width=width,
728
+ negative_prompt=negative_prompt,
729
+ generator=generator,
730
+ ).images[0]
731
+ )
732
+ else:
733
+ real_images.append(
734
+ pipe(
735
+ real_prompt,
736
+ input_id_images=input_id_images,
737
+ num_inference_steps=num_steps,
738
+ guidance_scale=guidance_scale,
739
+ start_merge_step=start_merge_step,
740
+ height=height,
741
+ width=width,
742
+ negative_prompt=negative_prompt,
743
+ generator=generator,
744
+ ).images[0]
745
+ )
746
+
747
+ total_results = [real_images[-1]] + total_results
748
+
749
+ captions = clipped_prompts
750
+ captions = [caption.replace("[NC]", "") for caption in captions]
751
+ captions = [
752
+ caption.split("#")[-1].strip() if "#" in caption else caption.strip()
753
+ for caption in captions
754
+ ]
755
+
756
+ comic = get_comic(
757
+ id_images + real_images,
758
+ comic_style,
759
+ captions=captions,
760
+ font=ImageFont.truetype("./fonts/Inkfree.ttf", int(45)),
761
+ )
762
+
763
+ extension = output_format.lower()
764
+ extension = "jpeg" if extension == "jpg" else extension
765
+ comic_out = f"/tmp/comic.{extension}"
766
+ comic[0].save(comic_out)
767
+
768
+ save_params = {"format": extension.upper()}
769
+ if not output_format == "png":
770
+ save_params["quality"] = output_quality
771
+ save_params["optimize"] = True
772
+
773
+ output_paths = []
774
+ for index, sample in enumerate(total_results[::-1]):
775
+ output_filename = f"/tmp/out-{index}.{extension}"
776
+ sample.save(output_filename, **save_params)
777
+ output_paths.append(Path(output_filename))
778
+
779
+ del pipe
780
+
781
+ return ModelOutput(comic=Path(comic_out), individual_images=output_paths)
requirements.txt ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ gradio==4.22.0
2
+ xformers==0.0.20
3
+ torch==2.0.1
4
+ torchvision==0.15.2
5
+ diffusers==0.25.0
6
+ transformers==4.36.2
7
+ huggingface-hub==0.20.2
8
+ spaces==0.19.4
9
+ numpy
10
+ accelerate
11
+ safetensors
12
+ omegaconf
13
+ peft
14
+ httpx==0.27.0
15
+ safetensors==0.4.0
results/20240520-164843/image_0.png ADDED

Git LFS Details

  • SHA256: e26d6e1bdf4d0e6951828c0a944a0754236c5428c3e14c6252205c2c8c57e3d8
  • Pointer size: 132 Bytes
  • Size of remote file: 3.04 MB
results/20240520-164843/image_1.png ADDED
results/20240520-164843/image_2.png ADDED
results/20240520-164843/image_3.png ADDED
results/20240520-164843/image_4.png ADDED
results/20240520-164843/image_5.png ADDED
results_examples/image1.png ADDED

Git LFS Details

  • SHA256: 0fe3c748813c1503b369c8b84bf35331e316cb12aacc6503536d11d9a514088e
  • Pointer size: 132 Bytes
  • Size of remote file: 8.3 MB
sample_data/README.md ADDED
@@ -0,0 +1,19 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ This directory includes a few sample datasets to get you started.
2
+
3
+ * `california_housing_data*.csv` is California housing data from the 1990 US
4
+ Census; more information is available at:
5
+ https://developers.google.com/machine-learning/crash-course/california-housing-data-description
6
+
7
+ * `mnist_*.csv` is a small sample of the
8
+ [MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is
9
+ described at: http://yann.lecun.com/exdb/mnist/
10
+
11
+ * `anscombe.json` contains a copy of
12
+ [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it
13
+ was originally described in
14
+
15
+ Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American
16
+ Statistician. 27 (1): 17-21. JSTOR 2682899.
17
+
18
+ and our copy was prepared by the
19
+ [vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).
sample_data/anscombe.json ADDED
@@ -0,0 +1,49 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [
2
+ {"Series":"I", "X":10.0, "Y":8.04},
3
+ {"Series":"I", "X":8.0, "Y":6.95},
4
+ {"Series":"I", "X":13.0, "Y":7.58},
5
+ {"Series":"I", "X":9.0, "Y":8.81},
6
+ {"Series":"I", "X":11.0, "Y":8.33},
7
+ {"Series":"I", "X":14.0, "Y":9.96},
8
+ {"Series":"I", "X":6.0, "Y":7.24},
9
+ {"Series":"I", "X":4.0, "Y":4.26},
10
+ {"Series":"I", "X":12.0, "Y":10.84},
11
+ {"Series":"I", "X":7.0, "Y":4.81},
12
+ {"Series":"I", "X":5.0, "Y":5.68},
13
+
14
+ {"Series":"II", "X":10.0, "Y":9.14},
15
+ {"Series":"II", "X":8.0, "Y":8.14},
16
+ {"Series":"II", "X":13.0, "Y":8.74},
17
+ {"Series":"II", "X":9.0, "Y":8.77},
18
+ {"Series":"II", "X":11.0, "Y":9.26},
19
+ {"Series":"II", "X":14.0, "Y":8.10},
20
+ {"Series":"II", "X":6.0, "Y":6.13},
21
+ {"Series":"II", "X":4.0, "Y":3.10},
22
+ {"Series":"II", "X":12.0, "Y":9.13},
23
+ {"Series":"II", "X":7.0, "Y":7.26},
24
+ {"Series":"II", "X":5.0, "Y":4.74},
25
+
26
+ {"Series":"III", "X":10.0, "Y":7.46},
27
+ {"Series":"III", "X":8.0, "Y":6.77},
28
+ {"Series":"III", "X":13.0, "Y":12.74},
29
+ {"Series":"III", "X":9.0, "Y":7.11},
30
+ {"Series":"III", "X":11.0, "Y":7.81},
31
+ {"Series":"III", "X":14.0, "Y":8.84},
32
+ {"Series":"III", "X":6.0, "Y":6.08},
33
+ {"Series":"III", "X":4.0, "Y":5.39},
34
+ {"Series":"III", "X":12.0, "Y":8.15},
35
+ {"Series":"III", "X":7.0, "Y":6.42},
36
+ {"Series":"III", "X":5.0, "Y":5.73},
37
+
38
+ {"Series":"IV", "X":8.0, "Y":6.58},
39
+ {"Series":"IV", "X":8.0, "Y":5.76},
40
+ {"Series":"IV", "X":8.0, "Y":7.71},
41
+ {"Series":"IV", "X":8.0, "Y":8.84},
42
+ {"Series":"IV", "X":8.0, "Y":8.47},
43
+ {"Series":"IV", "X":8.0, "Y":7.04},
44
+ {"Series":"IV", "X":8.0, "Y":5.25},
45
+ {"Series":"IV", "X":19.0, "Y":12.50},
46
+ {"Series":"IV", "X":8.0, "Y":5.56},
47
+ {"Series":"IV", "X":8.0, "Y":7.91},
48
+ {"Series":"IV", "X":8.0, "Y":6.89}
49
+ ]
sample_data/california_housing_test.csv ADDED
The diff for this file is too large to render. See raw diff
 
sample_data/california_housing_train.csv ADDED
The diff for this file is too large to render. See raw diff
 
sample_data/mnist_test.csv ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51c292478d94ec3a01461bdfa82eb0885d262eb09e615679b2d69dedb6ad09e7
3
+ size 18289443
sample_data/mnist_train_small.csv ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1ef64781aa03180f4f5ce504314f058f5d0227277df86060473d973cf43b033e
3
+ size 36523880
storydiffusionpipeline.py ADDED
File without changes
update.md ADDED
@@ -0,0 +1,28 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ## Update History
2
+
3
+ ### Update 2023-05-14
4
+
5
+ - Support Two persons,support for more characters will also be possible in the feature. In Pnhotomaker, currently, only one person can appear in a single image.
6
+ - Auto Save generated images in the ‘results’ folder.
7
+ - I have changed the way to fill in prompts; please refer to the example provided.
8
+
9
+ ### Update 2024-05-08
10
+
11
+ - Support [NC] in Ref Image Model (Photomaker work best in 1024x1024 but may cost a lot of GPU memory, I recommend you to use the res. as larger as possible)
12
+
13
+ <img src="results_examples/image1.png" height=100>
14
+
15
+ - Merge Push by @cryptowooser to support lastest pillow. But you may be updated pillow if you using the old version.
16
+
17
+
18
+
19
+ ### Todo
20
+
21
+ - Support add captions on all images for the classical commic Typesetting Style
22
+
23
+
24
+
25
+
26
+ ### Welcome to contribute
27
+
28
+ - Various layout styles.