---
library_name: diffusers
license: apache-2.0
datasets:
- valhalla/emoji-dataset
language:
- en
tags:
- art
---
## Model Details
**Abstract**:
*An unconditional diffusion model trained on the [valhalla/emoji-dataset](https://huggingface.co./datasets/valhalla/emoji-dataset) with the DDPM noise scheduler.*
## Inference
**DDPM** models can use *discrete noise schedulers* for inference, such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)

Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed, consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline

model_id = "randomani/DDPM-emoji-64"

# load model and scheduler (swap DDPMPipeline for DDIMPipeline or PNDMPipeline for faster inference)
ddpm = DDPMPipeline.from_pretrained(model_id)

# run the pipeline (sample random noise and denoise)
image = ddpm().images[0]

# save the generated image
image.save("ddpm_generated_image.png")
```
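
For faster sampling, the same checkpoint can be loaded into a DDIM pipeline, which denoises in far fewer steps. The sketch below is a minimal example; the step count, batch size, and seed are illustrative values, not settings tuned for this model:

```python
import torch
from diffusers import DDIMPipeline

model_id = "randomani/DDPM-emoji-64"

# load the same checkpoint with the DDIM sampler
pipe = DDIMPipeline.from_pretrained(model_id)

# move to GPU if one is available
pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# fixed seed for reproducible samples
generator = torch.Generator().manual_seed(0)

# 50 denoising steps instead of the DDPM default of 1000
images = pipe(batch_size=4, num_inference_steps=50, generator=generator).images

for i, img in enumerate(images):
    img.save(f"ddim_emoji_{i}.png")
```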
## Samples Generated
1. ![sample_1](https://huggingface.co./randomani/DDPM-emoji-64/resolve/main/1.png)
2. ![sample_2](https://huggingface.co./randomani/DDPM-emoji-64/resolve/main/2.png)
3. ![sample_3](https://huggingface.co./randomani/DDPM-emoji-64/resolve/main/3.png)
4. ![sample_4](https://huggingface.co./randomani/DDPM-emoji-64/resolve/main/4.png)