patrickvonplaten committed
Commit 9445050
2 Parent(s): 62fc896 35016b9

Merge branch 'main' of https://huggingface.co./diffusers/tools into main

Files changed (2):
  1. README.md +98 -0
  2. aa_orig_comp (6).png +0 -0
README.md ADDED
@@ -0,0 +1,98 @@

# Diffusers Tools

This is a collection of scripts that can be useful for various tasks related to the [diffusers library](https://github.com/huggingface/diffusers).

## 1. Test against original checkpoints

**It's very important that `diffusers` produces visually the exact same results as the original code bases!**

E.g., to make sure `diffusers` is identical to the original [CompVis codebase](https://github.com/CompVis/stable-diffusion), you can run the following in the original CompVis codebase:

1. Download the original [SD-1-4 checkpoint](https://huggingface.co/CompVis/stable-diffusion-v1-4) and put it in the correct folder following the instructions on https://github.com/CompVis/stable-diffusion (one way to fetch it is sketched after the command below).

2. Run the following command:

```
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --seed 0 --n_samples 1 --n_rows 1 --n_iter 1
```
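
As referenced in step 1, here is a minimal sketch of one way to fetch the original checkpoint with `huggingface_hub`. The repo id, filename, and target path are assumptions based on the CompVis instructions; adjust them if they differ:

```python
# Sketch: download the original .ckpt and link it where the CompVis scripts expect it.
# Repo id, filename, and target path are assumptions; check the CompVis README if they differ.
import os
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v-1-4-original",  # assumed repo hosting the original .ckpt
    filename="sd-v1-4.ckpt",                            # assumed filename
)

target = "models/ldm/stable-diffusion-v1/model.ckpt"    # path from the CompVis instructions
os.makedirs(os.path.dirname(target), exist_ok=True)
if not os.path.exists(target):
    os.symlink(ckpt_path, target)
```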

and compare this to the same command in `diffusers`:

```python
from diffusers import DiffusionPipeline, DDIMScheduler
import torch

# python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --seed 0 --n_samples 1 --n_rows 1 --n_iter 1
seed = 0
prompt = "a photograph of an astronaut riding a horse"

pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# DDIM matches the sampler the CompVis command above uses (no --plms flag is passed)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

torch.manual_seed(seed)
image = pipe(prompt, num_inference_steps=50).images[0]

image.save("/home/patrick_huggingface_co/images/aa_comp.png")
```

Both commands should give the following image on a V100:
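
In addition to eyeballing the two outputs, a quick pixel-level comparison can help. A minimal sketch, assuming the diffusers image was saved as above; the CompVis output path is an assumption and should be adjusted to wherever `txt2img.py` wrote its image:

```python
# Sketch: compare the diffusers output against the CompVis output pixel by pixel.
# The CompVis output path below is an assumption; adjust it to the actual output location.
import numpy as np
from PIL import Image

diffusers_img = np.asarray(Image.open("/home/patrick_huggingface_co/images/aa_comp.png"), dtype=np.float32)
original_img = np.asarray(Image.open("outputs/txt2img-samples/grid-0000.png"), dtype=np.float32)  # assumed path

assert diffusers_img.shape == original_img.shape, "outputs have different sizes; adjust paths or crop"

diff = np.abs(diffusers_img - original_img)
print(f"max abs pixel diff: {diff.max():.1f}, mean abs pixel diff: {diff.mean():.3f}")
```

Bit-for-bit equality is not expected (fp16 weights, different GPU kernels); the goal is that the images look identical and the pixel differences stay small.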

## 2. Test against [k-diffusion](https://github.com/crowsonkb/k-diffusion)

You can run the following script to compare against k-diffusion.

See the results [here](https://huggingface.co/datasets/patrickvonplaten/images).

```python
from diffusers import (
    StableDiffusionKDiffusionPipeline,
    StableDiffusionPipeline,
    DPMSolverMultistepScheduler,
    EulerDiscreteScheduler,
    HeunDiscreteScheduler,
    KDPM2DiscreteScheduler,
    LMSDiscreteScheduler,
)
import torch
import os

seed = 13
inference_steps = 25
# checkpoint = "CompVis/stable-diffusion-v1-4"
checkpoint = "stabilityai/stable-diffusion-2-1"
prompts = ["astronaut riding horse", "whale falling from sky", "magical forest", "highly photorealistic picture of johnny depp"]
prompts = 8 * ["highly photorealistic picture of johnny depp"]
# prompts = prompts[:1]
samplers = ["sample_dpmpp_2m", "sample_euler", "sample_heun", "sample_dpm_2", "sample_lms"]
# samplers = samplers[:1]

# 1. Generate images with the k-diffusion samplers wrapped by diffusers
pipe = StableDiffusionKDiffusionPipeline.from_pretrained(checkpoint, torch_dtype=torch.float16, safety_checker=None)
pipe = pipe.to("cuda")

for i, prompt in enumerate(prompts):
    prompt_f = f"{'_'.join(prompt.split())}_{i}"
    for sampler in samplers:
        pipe.set_scheduler(sampler)
        torch.manual_seed(seed + i)
        image = pipe(prompt, num_inference_steps=inference_steps).images[0]
        checkpoint_f = f"{'--'.join(checkpoint.split('/'))}"
        os.makedirs(f"/home/patrick_huggingface_co/images/{checkpoint_f}", exist_ok=True)
        os.makedirs(f"/home/patrick_huggingface_co/images/{checkpoint_f}/{sampler}", exist_ok=True)
        image.save(f"/home/patrick_huggingface_co/images/{checkpoint_f}/{sampler}/{prompt_f}.png")


# 2. Generate the same images with the corresponding native diffusers schedulers
pipe = StableDiffusionPipeline(**pipe.components)
pipe = pipe.to("cuda")

for i, prompt in enumerate(prompts):
    prompt_f = f"{'_'.join(prompt.split())}_{i}"
    for sampler in samplers:
        if sampler == "sample_euler":
            pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
        elif sampler == "sample_heun":
            pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
        elif sampler == "sample_dpmpp_2m":
            pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
        elif sampler == "sample_dpm_2":
            # KDPM2DiscreteScheduler is the diffusers counterpart of k-diffusion's sample_dpm_2
            pipe.scheduler = KDPM2DiscreteScheduler.from_config(pipe.scheduler.config)
        elif sampler == "sample_lms":
            pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

        torch.manual_seed(seed + i)
        image = pipe(prompt, num_inference_steps=inference_steps).images[0]
        checkpoint_f = f"{'--'.join(checkpoint.split('/'))}"
        os.makedirs(f"/home/patrick_huggingface_co/images/{checkpoint_f}", exist_ok=True)
        os.makedirs(f"/home/patrick_huggingface_co/images/{checkpoint_f}/{sampler}", exist_ok=True)
        image.save(f"/home/patrick_huggingface_co/images/{checkpoint_f}/{sampler}/{prompt_f}_hf.png")
```
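
After both loops have run, the `.png` / `_hf.png` pairs can be stitched side by side to make the visual comparison easier. A minimal sketch with PIL, assuming the folder layout produced by the script above:

```python
# Sketch: place the k-diffusion output and the diffusers output for the same
# prompt/sampler next to each other in a single image for easy visual comparison.
import os
from PIL import Image

root = "/home/patrick_huggingface_co/images/stabilityai--stable-diffusion-2-1"

for sampler in ["sample_dpmpp_2m", "sample_euler", "sample_heun", "sample_dpm_2", "sample_lms"]:
    folder = os.path.join(root, sampler)
    for name in sorted(os.listdir(folder)):
        if name.endswith("_hf.png") or name.endswith("_pair.png"):
            continue
        k_diff_img = Image.open(os.path.join(folder, name))
        hf_img = Image.open(os.path.join(folder, name.replace(".png", "_hf.png")))
        # k-diffusion output on the left, diffusers output on the right
        pair = Image.new("RGB", (k_diff_img.width + hf_img.width, k_diff_img.height))
        pair.paste(k_diff_img, (0, 0))
        pair.paste(hf_img, (k_diff_img.width, 0))
        pair.save(os.path.join(folder, name.replace(".png", "_pair.png")))
```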
aa_orig_comp (6).png ADDED