arxiv:1912.01865

StarGAN v2: Diverse Image Synthesis for Multiple Domains

Published on Dec 4, 2019
Abstract

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues, offering limited diversity or requiring a separate model for every domain pair. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal-faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, a set of high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.
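The two properties in the abstract can be illustrated with a toy sketch: a single mapping network with one lightweight head per domain turns a random latent code into a domain-specific style code (scalability), and sampling different latent codes yields different style codes, hence diverse outputs for the same input image (diversity). This is a conceptual sketch only, not the authors' implementation; all layer sizes, the stub "generator", and the modulation rule are illustrative assumptions standing in for the real convolutional networks in the repository above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- assumptions, not the paper's actual dimensions.
LATENT_DIM, STYLE_DIM, NUM_DOMAINS, IMG_PIXELS = 16, 64, 3, 256

# Mapping network: a shared trunk plus one small output head per domain,
# so one network serves all domains (the "scalability" property).
W_shared = rng.standard_normal((LATENT_DIM, 128)) * 0.1
W_heads = rng.standard_normal((NUM_DOMAINS, 128, STYLE_DIM)) * 0.1

def mapping_network(z: np.ndarray, domain: int) -> np.ndarray:
    """Map a latent code z to a style code for the given target domain."""
    h = np.tanh(z @ W_shared)          # shared representation
    return h @ W_heads[domain]         # domain-specific style code

def generator(x: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Stub generator: modulates the input with the style code.

    Stands in for the real conv net with style-based (AdaIN-like)
    modulation; here the style only scales the image, for illustration.
    """
    gain = 1.0 + 0.01 * s.mean()
    return np.clip(x * gain, 0.0, 1.0)

x = rng.random(IMG_PIXELS)  # a flattened stand-in for an input image

# "Diversity": two latent samples give two different style codes,
# hence two different translations of the same input into domain 1.
s1 = mapping_network(rng.standard_normal(LATENT_DIM), domain=1)
s2 = mapping_network(rng.standard_normal(LATENT_DIM), domain=1)
y1, y2 = generator(x, s1), generator(x, s2)
```

The design point mirrored here is that diversity comes from sampling style codes rather than training one generator per domain pair, which is what lets a single model cover many domains.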
