Add diffusers example
README.md
CHANGED
@@ -14,7 +14,7 @@ In addition to the textual input, it receives a `noise_level` as an input parameter
 ![Image](https://github.com/Stability-AI/stablediffusion/raw/main/assets/stable-samples/upscaling/merged-dog.png)
 
 - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `x4-upscaler-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.ckpt).
-- Use it with 🧨 diffusers
+- Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler#Examples)
 
 
 ## Model Details
@@ -35,6 +35,42 @@ In addition to the textual input, it receives a `noise_level` as an input parameter
 pages = {10684-10695}
 }
 
+
+## Examples
+
+Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 upscaling in a simple and efficient manner.
+
+```bash
+pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
+```
+
+```python
+import requests
+from PIL import Image
+from io import BytesIO
+from diffusers import StableDiffusionUpscalePipeline
+import torch
+
+# load model and scheduler
+model_id = "stabilityai/stable-diffusion-x4-upscaler"
+pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, revision="fp16", torch_dtype=torch.float16)
+pipeline = pipeline.to("cuda")
+
+# let's download an image
+url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
+response = requests.get(url)
+low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
+low_res_img = low_res_img.resize((128, 128))
+
+# the upscaler is text-guided, so pass a prompt describing the image together with the low-resolution input
+prompt = "a white cat"
+upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
+upscaled_image.save("upsampled_cat.png")
+```
+
+**Notes**:
+- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance).
+- If you have low GPU RAM available, add `pipeline.enable_attention_slicing()` after sending the pipeline to `cuda` to reduce VRAM usage (at the cost of speed); both options are illustrated in the sketch after the diff.
+
+
 # Uses
 
 ## Direct Use
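
For reference, here is a minimal sketch of the two memory-saving options mentioned in the notes above. It assumes the same `stabilityai/stable-diffusion-x4-upscaler` checkpoint and `diffusers` install as in the example; the `enable_xformers_memory_efficient_attention()` call additionally requires xformers to be installed.

```python
import torch
from diffusers import StableDiffusionUpscalePipeline

# Load the upscaler in half precision, as in the example above.
pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", revision="fp16", torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")

# Option 1: compute attention in slices, lowering peak VRAM at some speed cost.
pipeline.enable_attention_slicing()

# Option 2 (requires xformers): switch to memory-efficient attention kernels.
pipeline.enable_xformers_memory_efficient_attention()
```

Attention slicing trades speed for a lower memory peak, while xformers attention usually reduces memory without that penalty, so it is generally the better choice when available.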
|