import gradio as gr
from io import BytesIO
import requests
import PIL
from PIL import Image
import numpy as np
import os
import uuid
import torch
from torch import autocast
import cv2
from matplotlib import pyplot as plt
from inpainting import StableDiffusionInpaintingPipeline
from torchvision import transforms
from clipseg.models.clipseg import CLIPDensePredT

# Hugging Face auth token for downloading the Stable Diffusion weights;
# falls back to True so the locally cached login is used when API_TOKEN is unset.
auth_token = os.environ.get("API_TOKEN") or True


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


# device = "cuda" if torch.cuda.is_available() else "cpu"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("The model will be running on :: ", device, " ~device")

# Stable Diffusion inpainting pipeline, loaded in float16.
pipe = StableDiffusionInpaintingPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    # revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=auth_token,
).to(device)

# CLIPSeg model used to turn a text prompt into a segmentation mask.
# model = CLIPDensePredT(version='ViT-B/16', reduce_dim=64)
model = CLIPDensePredT(version='ViT-B/16', reduce_dim=64, complex_trans_conv=True)
model = model.to(device)
model.eval()
model.load_state_dict(
    torch.load('./clipseg/weights/rd64-uni.pth', map_location=device),
    strict=False,
)
print("Torch load(model) : ", model)

imgRes = 256  # 512

# Preprocessing for CLIPSeg: tensor conversion, ImageNet normalization, resize.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    transforms.Resize((imgRes, imgRes)),
])


def predict(radio, dict, word_mask, prompt=""):
    if radio == "draw a mask above":
        # The user drew the mask directly in the image editor.
        # with autocast("cuda"):
        # with autocast(device):  # enable=(False if device=='cpu' else True)):
        # with autocast(enabled=True, dtype=torch.bfloat16):
        with torch.cuda.amp.autocast(True):
            init_image = dict["image"].convert("RGB").resize((imgRes, imgRes))
            mask = dict["mask"].convert("RGB").resize((imgRes, imgRes))
    else:
        # Derive the mask from the text prompt with CLIPSeg.
        img = transform(dict["image"]).unsqueeze(0).to(device)  # match the CLIPSeg model's device
        word_masks = [word_mask]
        with torch.no_grad():
            preds = model(img.repeat(len(word_masks), 1, 1, 1), word_masks)[0]
        init_image = dict['image'].convert('RGB').resize((imgRes, imgRes))
        # Save the sigmoid of the prediction, then threshold it into a binary mask.
        filename = f"{uuid.uuid4()}.png"
        plt.imsave(filename, torch.sigmoid(preds[0][0]).cpu())  # move to CPU so matplotlib can save it
        img2 = cv2.imread(filename)
        gray_image = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        (thresh, bw_image) = cv2.threshold(gray_image, 100, 255, cv2.THRESH_BINARY)
        # cv2.cvtColor(bw_image, cv2.COLOR_BGR2RGB)  # result was unused; bw_image is single-channel
        mask = Image.fromarray(np.uint8(bw_image)).convert('RGB')
        os.remove(filename)
    # with autocast("cuda"):
    # with autocast(device):  # enable=(False if device=='cpu' else True)):
    # with autocast(enabled=True, dtype=torch.bfloat16):
    with torch.cuda.amp.autocast(True):
        images = pipe(prompt=prompt, init_image=init_image, mask_image=mask, strength=0.8)["sample"]
    return images[0]


# examples = [[dict(image="init_image.png", mask="mask_image.png"), "A panda sitting on a bench"]]

css = '''
.container {max-width: 1150px;margin: auto;padding-top: 1.5rem}
#image_upload{min-height:400px}
#image_upload [data-testid="image"], #image_upload [data-testid="image"] > div{min-height: 400px}
#mask_radio .gr-form{background:transparent; border: none}
#word_mask{margin-top: .75em !important}
#word_mask textarea:disabled{opacity: 0.3}
.footer {margin-bottom: 45px;margin-top: 35px;text-align: center;border-bottom: 1px solid #e5e5e5}
.footer>p {font-size: .8rem; display: inline-block; padding: 0 10px;transform: translateY(10px);background: white}
.dark .footer {border-color: #303030}
.dark .footer>p {background: #0b0f19}
.acknowledgments h4{margin: 1.25em 0 .25em 0;font-weight: bold;font-size: 115%}
#image_upload .touch-none{display: flex}
'''


def swap_word_mask(radio_option):
    # Enable the text box only when the user chooses to describe the mask.
    if radio_option == "type what to mask below":
        return gr.update(interactive=True, placeholder="A cat")
    else:
        return gr.update(interactive=False, placeholder="Disabled")


image_blocks = gr.Blocks(css=css)
with image_blocks as demo:
    gr.HTML(
        """
        Inpaint Stable Diffusion by either drawing a mask or typing what to replace