DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation - Bedrooms
Creators: Gwanghyun Kim, Taesung Kwon, Jong Chul Ye
Paper: https://arxiv.org/abs/2110.02711

DiffusionCLIP is a diffusion model well suited to image manipulation thanks to its nearly perfect inversion capability, an important advantage over GAN-based models. This checkpoint was trained on the "Bedrooms" category of the LSUN dataset.
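The inversion relies on the deterministic DDIM forward process: the model's predicted noise is used to step an image back toward its latent, so the latent can later be denoised to reconstruct the input almost exactly. Below is a minimal sketch of a single inversion step; the function and variable names are illustrative, not the repository's API, and the alphas are the usual cumulative products of (1 - beta) passed in as tensors.

```python
import torch

def ddim_inversion_step(x_t, eps, alpha_t, alpha_next):
    """One deterministic DDIM step from x_t toward the noisier latent.

    eps is the noise predicted by the diffusion model at timestep t;
    alpha_t and alpha_next are cumulative alpha products (0-dim tensors).
    """
    # Estimate the clean image x_0 implied by the current sample and noise.
    x0_pred = (x_t - (1 - alpha_t).sqrt() * eps) / alpha_t.sqrt()
    # Re-noise deterministically (no stochastic term) to the next timestep.
    return alpha_next.sqrt() * x0_pred + (1 - alpha_next).sqrt() * eps
```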
This checkpoint is most appropriate for manipulation, reconstruction, and style transfer on images of indoor locations, such as bedrooms. The weights should be loaded into the DiffusionCLIP model.
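As a minimal sketch, the checkpoint can be loaded with plain PyTorch and inspected before handing it to the model defined in the DiffusionCLIP repository; the filename `bedroom.pt` below is an assumption, so substitute the weight file shipped with this repository.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical filename for the LSUN-Bedrooms weights.
state_dict = torch.load("bedroom.pt", map_location=device)

# Print a few parameter names and shapes to confirm the checkpoint matches
# the diffusion model definition in the repo before load_state_dict is called.
# (Some checkpoints nest the weights under a key such as "model".)
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))
```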
Credits
- Code repository available at: https://github.com/gwang-kim/DiffusionCLIP
Citation
```bibtex
@article{kim2021diffusionclip,
  title={Diffusionclip: Text-guided image manipulation using diffusion models},
  author={Kim, Gwanghyun and Ye, Jong Chul},
  journal={arXiv preprint arXiv:2110.02711},
  year={2021}
}
```