abhishek-kumar yinshengming committed on
Commit
c13eb67
0 Parent(s):

Duplicate from microsoft/visual_chatgpt


Co-authored-by: yinshengming <[email protected]>

Files changed (7)
  1. .gitattributes +34 -0
  2. README.md +14 -0
  3. app.py +206 -0
  4. image/placeholder.txt +0 -0
  5. packages.txt +1 -0
  6. requirements.txt +32 -0
  7. visual_foundation_models.py +735 -0
.gitattributes ADDED
@@ -0,0 +1,34 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,14 @@
+ ---
+ title: Visual Chatgpt
+ emoji: 🎨
+ colorFrom: yellow
+ colorTo: yellow
+ sdk: gradio
+ sdk_version: 3.20.1
+ app_file: app.py
+ pinned: false
+ license: osl-3.0
+ duplicated_from: microsoft/visual_chatgpt
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py ADDED
@@ -0,0 +1,206 @@
+ VISUAL_CHATGPT_PREFIX = """Visual ChatGPT is designed to be able to assist with a wide range of text and visual related tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. Visual ChatGPT is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
+
+ Visual ChatGPT is able to process and understand large amounts of text and images. As a language model, Visual ChatGPT can not directly read images, but it has a list of tools to finish different visual tasks. Each image will have a file name formed as "image/xxx.png", and Visual ChatGPT can invoke different tools to indirectly understand pictures. When talking about images, Visual ChatGPT is very strict about the file name and will never fabricate nonexistent files. When using tools to generate new image files, Visual ChatGPT also knows that the image may not be the same as the user's demand, and will use other visual question answering tools or description tools to observe the real image. Visual ChatGPT is able to use tools in a sequence, and is loyal to the tool observation outputs rather than faking the image content and image file name. It will remember to provide the file name from the last tool observation, if a new image is generated.
+
+ Human may provide new figures to Visual ChatGPT with a description. The description helps Visual ChatGPT to understand this image, but Visual ChatGPT should use tools to finish the following tasks, rather than directly imagining from the description.
+
+ Overall, Visual ChatGPT is a powerful visual dialogue assistant tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics.
+
+
+ TOOLS:
+ ------
+
+ Visual ChatGPT has access to the following tools:"""
+
+ VISUAL_CHATGPT_FORMAT_INSTRUCTIONS = """To use a tool, please use the following format:
+
+ ```
+ Thought: Do I need to use a tool? Yes
+ Action: the action to take, should be one of [{tool_names}]
+ Action Input: the input to the action
+ Observation: the result of the action
+ ```
+
+ When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
+
+ ```
+ Thought: Do I need to use a tool? No
+ {ai_prefix}: [your response here]
+ ```
+ """
+
+ VISUAL_CHATGPT_SUFFIX = """You are very strict about filename correctness and will never fake a file name that does not exist.
+ You will remember to provide the image file name loyally if it's provided in the last tool observation.
+
+ Begin!
+
+ Previous conversation history:
+ {chat_history}
+
+ New input: {input}
+ Since Visual ChatGPT is a text language model, Visual ChatGPT must use tools to observe images rather than imagination.
+ The thoughts and observations are only visible to Visual ChatGPT; Visual ChatGPT should remember to repeat important information in the final response for the Human.
+ Thought: Do I need to use a tool? {agent_scratchpad}"""
+
+ from visual_foundation_models import *
+ from langchain.agents.initialize import initialize_agent
+ from langchain.agents.tools import Tool
+ from langchain.chains.conversation.memory import ConversationBufferMemory
+ from langchain.llms.openai import OpenAI
+ import re
+ import gradio as gr
+
+
+ def cut_dialogue_history(history_memory, keep_last_n_words=400):
+     if history_memory is None or len(history_memory) == 0:
+         return history_memory
+     tokens = history_memory.split()
+     n_tokens = len(tokens)
+     print(f"history_memory:{history_memory}, n_tokens: {n_tokens}")
+     if n_tokens < keep_last_n_words:
+         return history_memory
+     # Drop whole leading paragraphs until the remainder fits the word budget.
+     paragraphs = history_memory.split('\n')
+     last_n_tokens = n_tokens
+     while last_n_tokens >= keep_last_n_words:
+         last_n_tokens -= len(paragraphs[0].split(' '))
+         paragraphs = paragraphs[1:]
+     return '\n' + '\n'.join(paragraphs)
+
+
+ class ConversationBot:
+     def __init__(self, load_dict):
+         # load_dict = {'VisualQuestionAnswering':'cuda:0', 'ImageCaptioning':'cuda:1',...}
+         print(f"Initializing VisualChatGPT, load_dict={load_dict}")
+         if 'ImageCaptioning' not in load_dict:
+             raise ValueError("You have to load ImageCaptioning as a basic function for VisualChatGPT")
+
+         self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
+         self.models = dict()
+         for class_name, device in load_dict.items():
+             self.models[class_name] = globals()[class_name](device=device)
+
+         self.tools = []
+         for class_name, instance in self.models.items():
+             for e in dir(instance):
+                 if e.startswith('inference'):
+                     func = getattr(instance, e)
+                     self.tools.append(Tool(name=func.name, description=func.description, func=func))
+
+     def run_text(self, text, state):
+         self.agent.memory.buffer = cut_dialogue_history(self.agent.memory.buffer, keep_last_n_words=500)
+         res = self.agent({"input": text})
+         res['output'] = res['output'].replace("\\", "/")
+         response = re.sub(r'(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output'])
+         state = state + [(text, response)]
+         print(f"\nProcessed run_text, Input text: {text}\nCurrent state: {state}\n"
+               f"Current Memory: {self.agent.memory.buffer}")
+         return state, state
+
+     def run_image(self, image, state, txt):
+         image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png")
+         print("======>Auto Resize Image...")
+         img = Image.open(image.name)
+         width, height = img.size
+         ratio = min(512 / width, 512 / height)
+         width_new, height_new = (round(width * ratio), round(height * ratio))
+         width_new = int(np.round(width_new / 64.0)) * 64
+         height_new = int(np.round(height_new / 64.0)) * 64
+         img = img.resize((width_new, height_new))
+         img = img.convert('RGB')
+         img.save(image_filename, "PNG")
+         print(f"Resize image from {width}x{height} to {width_new}x{height_new}")
+         description = self.models['ImageCaptioning'].inference(image_filename)
+         Human_prompt = f'\nHuman: provide a figure named {image_filename}. The description is: {description}. This information helps you to understand this image, but you should use tools to finish the following tasks, rather than directly imagining from my description. If you understand, say \"Received\". \n'
+         AI_prompt = "Received. "
+         self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
+         state = state + [(f"![](/file={image_filename})*{image_filename}*", AI_prompt)]
+         print(f"\nProcessed run_image, Input image: {image_filename}\nCurrent state: {state}\n"
+               f"Current Memory: {self.agent.memory.buffer}")
+         return state, state, f'{txt} {image_filename} '
+
+     def init_agent(self, openai_api_key):
+         self.llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
+         self.agent = initialize_agent(
+             self.tools,
+             self.llm,
+             agent="conversational-react-description",
+             verbose=True,
+             memory=self.memory,
+             return_intermediate_steps=True,
+             agent_kwargs={'prefix': VISUAL_CHATGPT_PREFIX, 'format_instructions': VISUAL_CHATGPT_FORMAT_INSTRUCTIONS, 'suffix': VISUAL_CHATGPT_SUFFIX})
+
+         return gr.update(visible=True)
+
+ bot = ConversationBot({'Text2Image': 'cuda:0',
+                        'ImageCaptioning': 'cuda:0',
+                        'ImageEditing': 'cuda:0',
+                        'VisualQuestionAnswering': 'cuda:0',
+                        'Image2Canny': 'cpu',
+                        'CannyText2Image': 'cuda:0',
+                        'InstructPix2Pix': 'cuda:0',
+                        'Image2Depth': 'cpu',
+                        'DepthText2Image': 'cuda:0',
+                        })
+
+ with gr.Blocks(css="#chatbot {overflow:auto; height:500px;}") as demo:
+     gr.Markdown("<h3><center>Visual ChatGPT</center></h3>")
+     gr.Markdown(
+         """This is a demo of the work [Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models](https://github.com/microsoft/visual-chatgpt).<br>
+         This space connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting.<br>
+         This space currently only supports English (Chinese support is under development).<br>
+         """
+     )
+
+     with gr.Row():
+         openai_api_key_textbox = gr.Textbox(
+             placeholder="Paste your OpenAI API key here to start Visual ChatGPT (sk-...) and press Enter ↵️",
+             show_label=False,
+             lines=1,
+             type="password",
+         )
+
+     chatbot = gr.Chatbot(elem_id="chatbot", label="Visual ChatGPT")
+     state = gr.State([])
+
+     with gr.Row(visible=False) as input_raws:
+         with gr.Column(scale=0.7):
+             txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter, or upload an image").style(container=False)
+         with gr.Column(scale=0.10, min_width=0):
+             run = gr.Button("🏃‍♂️Run")
+         with gr.Column(scale=0.10, min_width=0):
+             clear = gr.Button("🔄Clear️")
+         with gr.Column(scale=0.10, min_width=0):
+             btn = gr.UploadButton("🖼️Upload", file_types=["image"])
+
+     gr.Examples(
+         examples=["Generate a figure of a cat running in the garden",
+                   "Replace the cat with a dog",
+                   "Remove the dog in this image",
+                   "Can you detect the canny edge of this image?",
+                   "Can you use this canny image to generate an oil painting of a dog",
+                   "Make it like water-color painting",
+                   "What is the background color",
+                   "Describe this image",
+                   "please detect the depth of this image",
+                   "Can you use this depth image to generate a cute dog",
+                   ],
+         inputs=txt
+     )
+
+     gr.HTML('''<br><br><br><center>You can duplicate this Space to skip the queue:
+             <a href="https://huggingface.co/spaces/microsoft/visual_chatgpt?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a><br>
+             </center>''')
+
+
+
+     openai_api_key_textbox.submit(bot.init_agent, [openai_api_key_textbox], [input_raws])
+     txt.submit(bot.run_text, [txt, state], [chatbot, state])
+     txt.submit(lambda: "", None, txt)
+     run.click(bot.run_text, [txt, state], [chatbot, state])
+     run.click(lambda: "", None, txt)
+     btn.upload(bot.run_image, [btn, state, txt], [chatbot, state, txt])
+     clear.click(bot.memory.clear)
+     clear.click(lambda: [], None, chatbot)
+     clear.click(lambda: [], None, state)
+
+ demo.queue(concurrency_count=10).launch(server_name="0.0.0.0", server_port=7860)
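
For orientation, here is a minimal sketch of how a new tool would plug into the ConversationBot above: any method of a loaded model class whose name starts with "inference" is wrapped as a LangChain Tool, and the prompts decorator (defined in visual_foundation_models.py below) supplies the tool's name and description. The ImageMirror class here is hypothetical, not part of this Space; it only illustrates the registration pattern.

from PIL import Image, ImageOps
from visual_foundation_models import prompts, get_new_image_name

class ImageMirror:
    # Hypothetical tool, shown only to illustrate the pattern above.
    def __init__(self, device):
        print(f"Initializing ImageMirror to {device}")

    @prompts(name="Mirror The Image",
             description="useful when you want to flip an image horizontally. "
                         "The input to this tool should be a string, representing the image_path")
    def inference(self, inputs):
        image = Image.open(inputs)
        mirrored = ImageOps.mirror(image)  # horizontal flip
        updated_image_path = get_new_image_name(inputs, func_name="mirror")
        mirrored.save(updated_image_path)
        return updated_image_path

Loading it would then follow the same pattern as the other models, e.g. ConversationBot({'ImageMirror': 'cpu', 'ImageCaptioning': 'cuda:0'}).
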
image/placeholder.txt ADDED
File without changes
packages.txt ADDED
@@ -0,0 +1 @@
+ python3-opencv
requirements.txt ADDED
@@ -0,0 +1,32 @@
+ --extra-index-url https://download.pytorch.org/whl/cu113
+ torch==1.12.1
+ torchvision==0.13.1
+ numpy==1.23.1
+ transformers==4.26.1
+ albumentations==1.3.0
+ opencv-contrib-python==4.3.0.36
+ imageio==2.9.0
+ imageio-ffmpeg==0.4.2
+ pytorch-lightning==1.5.0
+ omegaconf==2.1.1
+ test-tube>=0.7.5
+ streamlit==1.12.1
+ einops==0.3.0
+ webdataset==0.2.5
+ kornia==0.6
+ open_clip_torch==2.0.2
+ invisible-watermark>=0.1.5
+ streamlit-drawable-canvas==0.8.0
+ torchmetrics==0.6.0
+ timm==0.6.12
+ addict==2.4.0
+ yapf==0.32.0
+ prettytable==3.6.0
+ safetensors==0.2.7
+ basicsr==1.4.2
+ langchain==0.0.101
+ diffusers==0.14.0
+ gradio
+ openai
+ accelerate
+ controlnet-aux==0.0.1
visual_foundation_models.py ADDED
@@ -0,0 +1,735 @@
+ from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionInstructPix2PixPipeline
+ from diffusers import EulerAncestralDiscreteScheduler
+ from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
+ from controlnet_aux import OpenposeDetector, MLSDdetector, HEDdetector
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPSegProcessor, CLIPSegForImageSegmentation
+ from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering
+ from transformers import AutoImageProcessor, UperNetForSemanticSegmentation
+
+ import os
+ import random
+ import torch
+ import cv2
+ import uuid
+ from PIL import Image
+ import numpy as np
+ from pytorch_lightning import seed_everything
+
+ def prompts(name, description):
+     # Attaches the tool name and description that ConversationBot reads
+     # when it wraps an inference method as a LangChain Tool.
+     def decorator(func):
+         func.name = name
+         func.description = description
+         return func
+
+     return decorator
+
+ def get_new_image_name(org_img_name, func_name="update"):
+     # Derived files are named {uuid}_{func_name}_{recent_prev}_{most_org}.png
+     # so the original image's id is preserved across chained edits.
+     head_tail = os.path.split(org_img_name)
+     head = head_tail[0]
+     tail = head_tail[1]
+     name_split = tail.split('.')[0].split('_')
+     this_new_uuid = str(uuid.uuid4())[0:4]
+     if len(name_split) == 1:
+         most_org_file_name = name_split[0]
+         recent_prev_file_name = name_split[0]
+         new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name)
+     else:
+         assert len(name_split) == 4
+         most_org_file_name = name_split[3]
+         recent_prev_file_name = name_split[0]
+         new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name)
+     return os.path.join(head, new_file_name)
+
+
+ class MaskFormer:
+     def __init__(self, device):
+         print(f"Initializing MaskFormer to {device}")
+         self.device = device
+         self.processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
+         self.model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined").to(device)
+
+     def inference(self, image_path, text):
+         threshold = 0.5
+         min_area = 0.02
+         padding = 20
+         original_image = Image.open(image_path)
+         image = original_image.resize((512, 512))
+         inputs = self.processor(text=text, images=image, padding="max_length", return_tensors="pt").to(self.device)
+         with torch.no_grad():
+             outputs = self.model(**inputs)
+         mask = torch.sigmoid(outputs[0]).squeeze().cpu().numpy() > threshold
+         area_ratio = len(np.argwhere(mask)) / (mask.shape[0] * mask.shape[1])
+         if area_ratio < min_area:
+             return None
+         true_indices = np.argwhere(mask)
+         mask_array = np.zeros_like(mask, dtype=bool)
+         for idx in true_indices:
+             padded_slice = tuple(slice(max(0, i - padding), i + padding + 1) for i in idx)
+             mask_array[padded_slice] = True
+         visual_mask = (mask_array * 255).astype(np.uint8)
+         image_mask = Image.fromarray(visual_mask)
+         return image_mask.resize(original_image.size)
+
+
+ class ImageEditing:
+     def __init__(self, device):
+         print(f"Initializing ImageEditing to {device}")
+         self.device = device
+         self.mask_former = MaskFormer(device=self.device)
+         self.revision = 'fp16' if 'cuda' in device else None
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.inpaint = StableDiffusionInpaintPipeline.from_pretrained(
+             "runwayml/stable-diffusion-inpainting", revision=self.revision, torch_dtype=self.torch_dtype).to(device)
+
+     @prompts(name="Remove Something From The Photo",
+              description="useful when you want to remove an object or something from the photo "
+                          "from its description or location. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the object to be removed. ")
+     def inference_remove(self, inputs):
+         image_path, to_be_removed_txt = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         return self.inference_replace(f"{image_path},{to_be_removed_txt},background")
+
+     @prompts(name="Replace Something From The Photo",
+              description="useful when you want to replace an object from the object description or "
+                          "location with another object from its description. "
+                          "The input to this tool should be a comma separated string of three, "
+                          "representing the image_path, the object to be replaced, the object to be replaced with ")
+     def inference_replace(self, inputs):
+         image_path, to_be_replaced_txt, replace_with_txt = inputs.split(",")
+         original_image = Image.open(image_path)
+         original_size = original_image.size
+         mask_image = self.mask_former.inference(image_path, to_be_replaced_txt)
+         updated_image = self.inpaint(prompt=replace_with_txt, image=original_image.resize((512, 512)),
+                                      mask_image=mask_image.resize((512, 512))).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="replace-something")
+         updated_image = updated_image.resize(original_size)
+         updated_image.save(updated_image_path)
+         print(
+             f"\nProcessed ImageEditing, Input Image: {image_path}, Replace {to_be_replaced_txt} to {replace_with_txt}, "
+             f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class InstructPix2Pix:
+     def __init__(self, device):
+         print(f"Initializing InstructPix2Pix to {device}")
+         self.device = device
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix",
+                                                                            safety_checker=None,
+                                                                            torch_dtype=self.torch_dtype).to(device)
+         self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config)
+
+     @prompts(name="Instruct Image Using Text",
+              description="useful when you want the style of the image to be like the text. "
+                          "like: make it look like a painting. or make it like a robot. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the text. ")
+     def inference(self, inputs):
+         """Change style of image."""
+         print("===>Starting InstructPix2Pix Inference")
+         image_path, text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         original_image = Image.open(image_path)
+         image = self.pipe(text, image=original_image, num_inference_steps=40, image_guidance_scale=1.2).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="pix2pix")
+         image.save(updated_image_path)
+         print(f"\nProcessed InstructPix2Pix, Input Image: {image_path}, Instruct Text: {text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Text2Image:
+     def __init__(self, device):
+         print(f"Initializing Text2Image to {device}")
+         self.device = device
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
+                                                             torch_dtype=self.torch_dtype)
+         self.pipe.to(device)
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
+                         'fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image From User Input Text",
+              description="useful when you want to generate an image from a user input text and save it to a file. "
+                          "like: generate an image of an object or something, or generate an image that includes some objects. "
+                          "The input to this tool should be a string, representing the text used to generate image. ")
+     def inference(self, text):
+         image_filename = os.path.join('image', f"{str(uuid.uuid4())[:8]}.png")
+         prompt = text + ', ' + self.a_prompt
+         image = self.pipe(prompt, negative_prompt=self.n_prompt).images[0]
+         image.save(image_filename)
+         print(
+             f"\nProcessed Text2Image, Input Text: {text}, Output Image: {image_filename}")
+         return image_filename
+
+
+ class ImageCaptioning:
+     def __init__(self, device):
+         print(f"Initializing ImageCaptioning to {device}")
+         self.device = device
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
+         self.model = BlipForConditionalGeneration.from_pretrained(
+             "Salesforce/blip-image-captioning-base", torch_dtype=self.torch_dtype).to(self.device)
+
+     @prompts(name="Get Photo Description",
+              description="useful when you want to know what is inside the photo. receives image_path as input. "
+                          "The input to this tool should be a string, representing the image_path. ")
+     def inference(self, image_path):
+         inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device, self.torch_dtype)
+         out = self.model.generate(**inputs)
+         captions = self.processor.decode(out[0], skip_special_tokens=True)
+         print(f"\nProcessed ImageCaptioning, Input Image: {image_path}, Output Text: {captions}")
+         return captions
+
+
+ class Image2Canny:
+     def __init__(self, device):
+         print("Initializing Image2Canny")
+         self.low_threshold = 100
+         self.high_threshold = 200
+
+     @prompts(name="Edge Detection On Image",
+              description="useful when you want to detect the edge of the image. "
+                          "like: detect the edges of this image, or canny detection on image, "
+                          "or perform edge detection on this image, or detect the canny image of this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         image = np.array(image)
+         canny = cv2.Canny(image, self.low_threshold, self.high_threshold)
+         canny = canny[:, :, None]
+         canny = np.concatenate([canny, canny, canny], axis=2)
+         canny = Image.fromarray(canny)
+         updated_image_path = get_new_image_name(inputs, func_name="edge")
+         canny.save(updated_image_path)
+         print(f"\nProcessed Image2Canny, Input Image: {inputs}, Output Canny: {updated_image_path}")
+         return updated_image_path
+
+
+ class CannyText2Image:
+     def __init__(self, device):
+         print(f"Initializing CannyText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-canny",
+                                                           torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype)
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
+                         'fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Canny Image",
+              description="useful when you want to generate a new real image from both the user description and a canny image."
+                          " like: generate a real image of an object or something from this canny image,"
+                          " or generate a new real image of an object or something from this edge image. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description. ")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="canny2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed CannyText2Image, Input Canny: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Line:
+     def __init__(self, device):
+         print("Initializing Image2Line")
+         self.detector = MLSDdetector.from_pretrained('lllyasviel/ControlNet')
+
+     @prompts(name="Line Detection On Image",
+              description="useful when you want to detect the straight lines of the image. "
+                          "like: detect the straight lines of this image, or straight line detection on image, "
+                          "or perform straight line detection on this image, or detect the straight line image of this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         mlsd = self.detector(image)
+         updated_image_path = get_new_image_name(inputs, func_name="line-of")
+         mlsd.save(updated_image_path)
+         print(f"\nProcessed Image2Line, Input Image: {inputs}, Output Line: {updated_image_path}")
+         return updated_image_path
+
+
+ class LineText2Image:
+     def __init__(self, device):
+         print(f"Initializing LineText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-mlsd",
+                                                           torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype
+         )
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
+                         'fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Line Image",
+              description="useful when you want to generate a new real image from both the user description "
+                          "and a straight line image. "
+                          "like: generate a real image of an object or something from this straight line image, "
+                          "or generate a new real image of an object or something from these straight lines. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description. ")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="line2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed LineText2Image, Input Line: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Hed:
+     def __init__(self, device):
+         print("Initializing Image2Hed")
+         self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet')
+
+     @prompts(name="Hed Detection On Image",
+              description="useful when you want to detect the soft hed boundary of the image. "
+                          "like: detect the soft hed boundary of this image, or hed boundary detection on image, "
+                          "or perform hed boundary detection on this image, or detect soft hed boundary image of this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         hed = self.detector(image)
+         updated_image_path = get_new_image_name(inputs, func_name="hed-boundary")
+         hed.save(updated_image_path)
+         print(f"\nProcessed Image2Hed, Input Image: {inputs}, Output Hed: {updated_image_path}")
+         return updated_image_path
+
+
+ class HedText2Image:
+     def __init__(self, device):
+         print(f"Initializing HedText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-hed",
+                                                           torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype
+         )
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
+                         'fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Soft Hed Boundary Image",
+              description="useful when you want to generate a new real image from both the user description "
+                          "and a soft hed boundary image. "
+                          "like: generate a real image of an object or something from this soft hed boundary image, "
+                          "or generate a new real image of an object or something from this hed boundary. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="hed2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed HedText2Image, Input Hed: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Scribble:
+     def __init__(self, device):
+         print("Initializing Image2Scribble")
+         self.detector = HEDdetector.from_pretrained('lllyasviel/ControlNet')
+
+     @prompts(name="Sketch Detection On Image",
+              description="useful when you want to generate a scribble of the image. "
+                          "like: generate a scribble of this image, or generate a sketch from this image, "
+                          "detect the sketch from this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         scribble = self.detector(image, scribble=True)
+         updated_image_path = get_new_image_name(inputs, func_name="scribble")
+         scribble.save(updated_image_path)
+         print(f"\nProcessed Image2Scribble, Input Image: {inputs}, Output Scribble: {updated_image_path}")
+         return updated_image_path
+
+
+ class ScribbleText2Image:
+     def __init__(self, device):
+         print(f"Initializing ScribbleText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-scribble",
+                                                           torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype
+         )
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
+                         'fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Sketch Image",
+              description="useful when you want to generate a new real image from both the user description and "
+                          "a scribble image or a sketch image. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="scribble2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed ScribbleText2Image, Input Scribble: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Pose:
+     def __init__(self, device):
+         print("Initializing Image2Pose")
+         self.detector = OpenposeDetector.from_pretrained('lllyasviel/ControlNet')
+
+     @prompts(name="Pose Detection On Image",
+              description="useful when you want to detect the human pose of the image. "
+                          "like: generate human poses of this image, or generate a pose image from this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         pose = self.detector(image)
+         updated_image_path = get_new_image_name(inputs, func_name="human-pose")
+         pose.save(updated_image_path)
+         print(f"\nProcessed Image2Pose, Input Image: {inputs}, Output Pose: {updated_image_path}")
+         return updated_image_path
+
+
+ class PoseText2Image:
+     def __init__(self, device):
+         print(f"Initializing PoseText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-openpose",
+                                                           torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype)
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.num_inference_steps = 20
+         self.seed = -1
+         self.unconditional_guidance_scale = 9.0
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
+                         ' fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Pose Image",
+              description="useful when you want to generate a new real image from both the user description "
+                          "and a human pose image. "
+                          "like: generate a real image of a human from this human pose image, "
+                          "or generate a new real image of a human from this pose. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="pose2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed PoseText2Image, Input Pose: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Seg:
+     def __init__(self, device):
+         print("Initializing Image2Seg")
+         self.image_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
+         self.image_segmentor = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")
+         self.ade_palette = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
+                             [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
+                             [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
+                             [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
+                             [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
+                             [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
+                             [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
+                             [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
+                             [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
+                             [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
+                             [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
+                             [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
+                             [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
+                             [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
+                             [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255],
+                             [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255],
+                             [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0],
+                             [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0],
+                             [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255],
+                             [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255],
+                             [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20],
+                             [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255],
+                             [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255],
+                             [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255],
+                             [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0],
+                             [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0],
+                             [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255],
+                             [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112],
+                             [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160],
+                             [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163],
+                             [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0],
+                             [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0],
+                             [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255],
+                             [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204],
+                             [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255],
+                             [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255],
+                             [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194],
+                             [102, 255, 0], [92, 0, 255]]
+
+     @prompts(name="Segmentation On Image",
+              description="useful when you want to detect segmentations of the image. "
+                          "like: segment this image, or generate segmentations on this image, "
+                          "or perform segmentation on this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         pixel_values = self.image_processor(image, return_tensors="pt").pixel_values
+         with torch.no_grad():
+             outputs = self.image_segmentor(pixel_values)
+         seg = self.image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
+         color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8)  # height, width, 3
+         palette = np.array(self.ade_palette)
+         for label, color in enumerate(palette):
+             color_seg[seg == label, :] = color
+         color_seg = color_seg.astype(np.uint8)
+         segmentation = Image.fromarray(color_seg)
+         updated_image_path = get_new_image_name(inputs, func_name="segmentation")
+         segmentation.save(updated_image_path)
+         print(f"\nProcessed Image2Seg, Input Image: {inputs}, Output Seg: {updated_image_path}")
+         return updated_image_path
+
+
+ class SegText2Image:
+     def __init__(self, device):
+         print(f"Initializing SegText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-seg",
+                                                           torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype)
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
+                         ' fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Segmentations",
+              description="useful when you want to generate a new real image from both the user description and segmentations. "
+                          "like: generate a real image of an object or something from this segmentation image, "
+                          "or generate a new real image of an object or something from these segmentations. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="segment2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed SegText2Image, Input Seg: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Depth:
+     def __init__(self, device):
+         print("Initializing Image2Depth")
+         self.depth_estimator = pipeline('depth-estimation')
+
+     @prompts(name="Predict Depth On Image",
+              description="useful when you want to detect depth of the image. like: generate the depth from this image, "
+                          "or detect the depth map on this image, or predict the depth for this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         depth = self.depth_estimator(image)['depth']
+         depth = np.array(depth)
+         depth = depth[:, :, None]
+         depth = np.concatenate([depth, depth, depth], axis=2)
+         depth = Image.fromarray(depth)
+         updated_image_path = get_new_image_name(inputs, func_name="depth")
+         depth.save(updated_image_path)
+         print(f"\nProcessed Image2Depth, Input Image: {inputs}, Output Depth: {updated_image_path}")
+         return updated_image_path
+
+
+ class DepthText2Image:
+     def __init__(self, device):
+         print(f"Initializing DepthText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained(
+             "fusing/stable-diffusion-v1-5-controlnet-depth", torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype)
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
+                         ' fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Depth",
+              description="useful when you want to generate a new real image from both the user description and depth image. "
+                          "like: generate a real image of an object or something from this depth image, "
+                          "or generate a new real image of an object or something from the depth map. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="depth2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed DepthText2Image, Input Depth: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class Image2Normal:
+     def __init__(self, device):
+         print("Initializing Image2Normal")
+         self.depth_estimator = pipeline("depth-estimation", model="Intel/dpt-hybrid-midas")
+         self.bg_threshold = 0.4
+
+     @prompts(name="Predict Normal Map On Image",
+              description="useful when you want to detect the normal map of the image. "
+                          "like: generate normal map from this image, or predict normal map of this image. "
+                          "The input to this tool should be a string, representing the image_path")
+     def inference(self, inputs):
+         image = Image.open(inputs)
+         original_size = image.size
+         image = self.depth_estimator(image)['predicted_depth'][0]
+         image = image.numpy()
+         image_depth = image.copy()
+         image_depth -= np.min(image_depth)
+         image_depth /= np.max(image_depth)
+         x = cv2.Sobel(image, cv2.CV_32F, 1, 0, ksize=3)
+         x[image_depth < self.bg_threshold] = 0
+         y = cv2.Sobel(image, cv2.CV_32F, 0, 1, ksize=3)
+         y[image_depth < self.bg_threshold] = 0
+         z = np.ones_like(x) * np.pi * 2.0
+         image = np.stack([x, y, z], axis=2)
+         image /= np.sum(image ** 2.0, axis=2, keepdims=True) ** 0.5
+         image = (image * 127.5 + 127.5).clip(0, 255).astype(np.uint8)
+         image = Image.fromarray(image)
+         image = image.resize(original_size)
+         updated_image_path = get_new_image_name(inputs, func_name="normal-map")
+         image.save(updated_image_path)
+         print(f"\nProcessed Image2Normal, Input Image: {inputs}, Output Normal: {updated_image_path}")
+         return updated_image_path
+
+
+ class NormalText2Image:
+     def __init__(self, device):
+         print(f"Initializing NormalText2Image to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.controlnet = ControlNetModel.from_pretrained(
+             "fusing/stable-diffusion-v1-5-controlnet-normal", torch_dtype=self.torch_dtype)
+         self.pipe = StableDiffusionControlNetPipeline.from_pretrained(
+             "runwayml/stable-diffusion-v1-5", controlnet=self.controlnet, safety_checker=None,
+             torch_dtype=self.torch_dtype)
+         self.pipe.scheduler = UniPCMultistepScheduler.from_config(self.pipe.scheduler.config)
+         self.pipe.to(device)
+         self.seed = -1
+         self.a_prompt = 'best quality, extremely detailed'
+         self.n_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit,' \
+                         ' fewer digits, cropped, worst quality, low quality'
+
+     @prompts(name="Generate Image Condition On Normal Map",
+              description="useful when you want to generate a new real image from both the user description and normal map. "
+                          "like: generate a real image of an object or something from this normal map, "
+                          "or generate a new real image of an object or something from the normal map. "
+                          "The input to this tool should be a comma separated string of two, "
+                          "representing the image_path and the user description")
+     def inference(self, inputs):
+         image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         image = Image.open(image_path)
+         self.seed = random.randint(0, 65535)
+         seed_everything(self.seed)
+         prompt = f'{instruct_text}, {self.a_prompt}'
+         image = self.pipe(prompt, image, num_inference_steps=20, eta=0.0, negative_prompt=self.n_prompt,
+                           guidance_scale=9.0).images[0]
+         updated_image_path = get_new_image_name(image_path, func_name="normal2image")
+         image.save(updated_image_path)
+         print(f"\nProcessed NormalText2Image, Input Normal: {image_path}, Input Text: {instruct_text}, "
+               f"Output Image: {updated_image_path}")
+         return updated_image_path
+
+
+ class VisualQuestionAnswering:
+     def __init__(self, device):
+         print(f"Initializing VisualQuestionAnswering to {device}")
+         self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32
+         self.device = device
+         self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
+         self.model = BlipForQuestionAnswering.from_pretrained(
+             "Salesforce/blip-vqa-base", torch_dtype=self.torch_dtype).to(self.device)
+
+     @prompts(name="Answer Question About The Image",
+              description="useful when you need an answer for a question based on an image. "
+                          "like: what is the background color of the last image, how many cats in this figure, what is in this figure. "
+                          "The input to this tool should be a comma separated string of two, representing the image_path and the question")
+     def inference(self, inputs):
+         image_path, question = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
+         raw_image = Image.open(image_path).convert('RGB')
+         inputs = self.processor(raw_image, question, return_tensors="pt").to(self.device, self.torch_dtype)
+         out = self.model.generate(**inputs)
+         answer = self.processor.decode(out[0], skip_special_tokens=True)
+         print(f"\nProcessed VisualQuestionAnswering, Input Image: {image_path}, Input Question: {question}, "
+               f"Output Answer: {answer}")
+         return answer
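
The naming scheme in get_new_image_name is what lets the agent track provenance across chained edits: every derived file is named {uuid}_{func_name}_{recent_prev}_{most_org}.png, so the id of the very first upload survives any number of tool invocations. A small sketch of the chain (the 8-character id "49a1b9f3" is illustrative; the 4-character uuids are random at run time):

from visual_foundation_models import get_new_image_name

# First derivation from an uploaded file (stem has no underscores):
p1 = get_new_image_name("image/49a1b9f3.png", func_name="edge")
# p1 == "image/{uuid4}_edge_49a1b9f3_49a1b9f3.png"

# Second derivation: the original id stays in the last slot, while the
# immediate predecessor's uuid moves into the third slot:
p2 = get_new_image_name(p1, func_name="canny2image")
# p2 == "image/{uuid4'}_canny2image_{uuid4}_49a1b9f3.png"

Note the four-field assert in get_new_image_name: func_name values therefore use hyphens (e.g. "replace-something", "normal-map"), never underscores, or the split on "_" would yield more than four fields on the next derivation.
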