---
license: apache-2.0
language:
- en
library_name: diffusers
---
<div align="center">
<h1>StoryMaker: Towards consistent characters in text-to-image generation</h1>

<img src='https://img.shields.io/badge/Technique-Report-red'>
<a href='https://huggingface.co/RED-AIGC/StoryMaker'><img src='https://img.shields.io/static/v1?label=Paper&message=Huggingface&color=orange'></a>

</div>

StoryMaker is a personalization solution that preserves not only the consistency of faces but also of clothing, hairstyles, and bodies in scenes with multiple characters, enabling the generation of a story told through a series of images.

<p align="center">
  <img src="assets/day1.png">
  Visualization of images generated by StoryMaker. The first three rows tell a story about a day in the life of an office worker, and the last two rows tell a story based on the movie "Before Sunrise".
</p>

## Demos

### Two Portraits Synthesis

<p align="center">
  <img src="assets/two.png">
</p>

### Diverse Applications

<p align="center">
  <img src="assets/diverse.png">
</p>

## Download

You can download the model directly from [Hugging Face](https://huggingface.co/RED-AIGC/StoryMaker).

If you cannot access Hugging Face, you can use [hf-mirror](https://hf-mirror.com/) to download the models:

```bash
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download RED-AIGC/StoryMaker --local-dir checkpoints --local-dir-use-symlinks False
```
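
Alternatively, the same download can be scripted with `huggingface_hub`. This is a minimal sketch; set the `HF_ENDPOINT` environment variable first only if you need the mirror workaround above:

```python
# a minimal sketch: fetch the StoryMaker weights from Python instead of the CLI;
# export HF_ENDPOINT=https://hf-mirror.com beforehand if you need the mirror
from huggingface_hub import snapshot_download

snapshot_download(repo_id="RED-AIGC/StoryMaker", local_dir="checkpoints")
```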

For the face encoder, you need to download the weights manually via this [URL](https://github.com/deepinsight/insightface/issues/1896#issuecomment-1023867304) to `models/buffalo_l`, as the default link is invalid. Once you have prepared all the models, the folder tree should look like this:

```
.
├── models
├── checkpoints/mask.bin
├── pipeline_sdxl_storymaker.py
└── README.md
```
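
If the face encoder arrives as a zip archive, here is a minimal sketch for unpacking it into place (the archive name `buffalo_l.zip` and its location are assumptions; adjust them to wherever you saved the download):

```python
# unpack a manually downloaded buffalo_l.zip so the .onnx files land under
# ./models/buffalo_l, where FaceAnalysis(root='./') expects them
# (the archive path 'buffalo_l.zip' is an assumption)
import os
import zipfile

os.makedirs("models/buffalo_l", exist_ok=True)
with zipfile.ZipFile("buffalo_l.zip") as zf:
    zf.extractall("models/buffalo_l")
```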

## Usage

```python
# !pip install diffusers opencv-python transformers accelerate insightface
import cv2
import torch
import numpy as np
from PIL import Image

from diffusers import UniPCMultistepScheduler
from insightface.app import FaceAnalysis
from pipeline_sdxl_storymaker import StableDiffusionXLStoryMakerPipeline

# prepare 'buffalo_l' under ./models
app = FaceAnalysis(name='buffalo_l', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

# prepare models under ./checkpoints
face_adapter = './checkpoints/mask.bin'
image_encoder_path = 'laion/CLIP-ViT-H-14-laion2B-s32B-b79K'  # from https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K

base_model = 'huaquan/YamerMIX_v11'  # from https://huggingface.co/huaquan/YamerMIX_v11
pipe = StableDiffusionXLStoryMakerPipeline.from_pretrained(
    base_model,
    torch_dtype=torch.float16
)
pipe.cuda()

# load the StoryMaker adapter and swap in the UniPC scheduler
pipe.load_storymaker_adapter(image_encoder_path, face_adapter, scale=0.8, lora_scale=0.8)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```
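
If you run into GPU memory limits, the standard diffusers memory helpers may also apply here; the pipeline appears to build on the SDXL pipeline, but whether these calls work with it is an assumption we have not verified:

```python
# optional memory savers from the diffusers base pipeline (untested with
# StableDiffusionXLStoryMakerPipeline; treat as an assumption)
pipe.enable_vae_slicing()        # decode latents in slices to cut peak VRAM
pipe.enable_attention_slicing()  # slice attention computation, slightly slower
```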

Then you can customize your own images:

```python
# load an image and its mask
face_image = Image.open("examples/ldh.png").convert('RGB')
mask_image = Image.open("examples/ldh_mask.png").convert('RGB')

face_info = app.get(cv2.cvtColor(np.array(face_image), cv2.COLOR_RGB2BGR))
face_info = sorted(face_info, key=lambda x: (x['bbox'][2] - x['bbox'][0]) * (x['bbox'][3] - x['bbox'][1]))[-1]  # keep only the largest detected face

prompt = "a person is taking a selfie, the person is wearing a red hat, and a volcano is in the distance"
n_prompt = "bad quality, NSFW, low quality, ugly, disfigured, deformed"

generator = torch.Generator(device='cuda').manual_seed(666)
for i in range(4):
    output = pipe(
        image=face_image, mask_image=mask_image, face_info=face_info,
        prompt=prompt,
        negative_prompt=n_prompt,
        ip_adapter_scale=0.8, lora_scale=0.8,
        num_inference_steps=25,
        guidance_scale=7.5,
        height=1280, width=960,
        generator=generator,
    ).images[0]
    output.save(f'examples/results/ldh666_new_{i}.jpg')
```
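
To compare the four seeds at a glance, here is a small PIL sketch that tiles the saved frames into a 2x2 contact sheet (the input file names follow the loop above; the grid output path is an assumption):

```python
# tile the four generated frames into one 2x2 contact sheet with PIL
from PIL import Image

frames = [Image.open(f'examples/results/ldh666_new_{i}.jpg') for i in range(4)]
w, h = frames[0].size
sheet = Image.new('RGB', (2 * w, 2 * h))
for i, frame in enumerate(frames):
    sheet.paste(frame, ((i % 2) * w, (i // 2) * h))
sheet.save('examples/results/ldh666_grid.jpg')
```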

## Acknowledgements
- Our work is highly inspired by [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter) and [InstantID](https://github.com/instantX-research/InstantID). Thanks for their great work!
- Thanks to [Yamer](https://civitai.com/user/Yamer) for developing [YamerMIX](https://civitai.com/models/84040?modelVersionId=309729), which we use as the base model in our demo.