StarGAN v2: Diverse Image Synthesis for Multiple Domains
Abstract
A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.