arxiv:2112.11641

JoJoGAN: One Shot Face Stylization

Published on Dec 22, 2021

Abstract

A style mapper applies some fixed style to its input images (so, for example, taking faces to cartoons). This paper describes a simple procedure -- JoJoGAN -- to learn a style mapper from a single example of the style. JoJoGAN uses a GAN inversion procedure and StyleGAN's style-mixing property to produce a substantial paired dataset from a single example style. The paired dataset is then used to fine-tune a StyleGAN. An image can then be style mapped by GAN inversion followed by the fine-tuned StyleGAN. JoJoGAN needs just one reference and as little as 30 seconds of training time. JoJoGAN can use extreme style references (say, animal faces) successfully. Furthermore, one can control what aspects of the style are used and how much of the style is applied. Qualitative and quantitative evaluation show that JoJoGAN produces high-quality, high-resolution images that vastly outperform the current state-of-the-art.
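The pipeline in the abstract (invert the single reference, style-mix to synthesize a paired dataset, fine-tune the generator, then stylize by inversion plus generation) can be sketched as below. This is a minimal illustration with stub components, not the authors' implementation: `gan_invert` and `style_mix` are hypothetical stand-ins for a real inverter (e.g. e4e) and StyleGAN's mixing operation, the 18x512 W+ shape matches StyleGAN2 at 1024px resolution, and the choice of which layers to keep from the reference is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# StyleGAN2 W+ space at 1024px: 18 style layers of 512 dimensions each.
N_LAYERS, DIM = 18, 512

def gan_invert(image):
    """Stub GAN inversion: image -> W+ latent code of shape (18, 512).
    A real implementation would use an encoder such as e4e or an
    optimization-based inverter."""
    return rng.normal(size=(N_LAYERS, DIM))

def style_mix(w_ref, w_rand, keep_layers):
    """Style mixing: keep the reference's codes on `keep_layers`,
    take the random sample's codes on all other layers."""
    w = w_rand.copy()
    w[keep_layers] = w_ref[keep_layers]
    return w

# 1. Invert the single style reference into W+.
reference_image = None  # placeholder for the one style exemplar
w_ref = gan_invert(reference_image)

# 2. Build a paired dataset by style mixing: each mixed latent keeps the
#    reference's codes on some layers (here the first 7, an assumed split)
#    and varies the rest, so the frozen generator decodes it to a face
#    that shares the reference's style. The reference image itself serves
#    as the training target for every mixed latent.
coarse_layers = list(range(7))
paired_latents = [
    style_mix(w_ref, rng.normal(size=(N_LAYERS, DIM)), coarse_layers)
    for _ in range(32)
]

# 3. Fine-tuning (not shown): update the generator weights so that
#    G(w_mixed) matches the reference image under a perceptual loss.
# 4. Inference (not shown): stylized = G_finetuned(gan_invert(new_face)).
```

Steps 3 and 4 are left as comments because they require an actual StyleGAN checkpoint; the sketch only shows how a paired dataset arises from one reference via style mixing.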

