---
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-image
---

# Model Card for GeoSynth-OSM

<!-- Provide a quick summary of what the model is/does. -->
This is a ControlNet-based model that synthesizes satellite images from OpenStreetMap (OSM) tiles. The underlying base model is [stable-diffusion-2-1-base](https://huggingface.co./stabilityai/stable-diffusion-2-1-base) (v2-1_512-ema-pruned.ckpt).

  * Use it with 🧨 [diffusers](#examples)
  * Use it with the [ControlNet](https://github.com/lllyasviel/ControlNet/tree/main?tab=readme-ov-file) repository

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [stable-diffusion-2-1-base](https://huggingface.co./stabilityai/stable-diffusion-2-1-base)
- **Paper:** [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543)

## Examples

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
from PIL import Image

# OpenStreetMap tile used as the conditioning image
img = Image.open("osm_tile_18_42048_101323.jpeg")

controlnet = ControlNetModel.from_pretrained("MVRL/GeoSynth-OSM")

scheduler = UniPCMultistepScheduler.from_pretrained("stabilityai/stable-diffusion-2-1-base", subfolder="scheduler")

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, scheduler=scheduler
)
pipe.enable_xformers_memory_efficient_attention()  # optional; requires xformers to be installed
pipe.enable_model_cpu_offload()

# generate image
generator = torch.manual_seed(10345340)
image = pipe(
    "Satellite image features a city neighborhood",
    num_inference_steps=50,
    generator=generator,
    image=img,
    controlnet_conditioning_scale=1.0,
).images[0]

image.save("generated_city.jpg")
```


## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]