arxiv:2301.04604

LinkGAN: Linking GAN Latents to Pixels for Controllable Image Synthesis

Published on Jan 11, 2023

Abstract

This work presents an easy-to-use regularizer for GAN training, which helps explicitly link some axes of the latent space to an image region or a semantic category (e.g., sky) in the synthesis. Establishing such a connection facilitates more convenient local control of GAN generation, where users can alter image content only within a spatial area simply by partially resampling the latent codes. Experimental results confirm four appealing properties of our regularizer, which we call LinkGAN. (1) Any image region can be linked to the latent space, even if the region is pre-selected before training and fixed for all instances. (2) Two or more regions can be independently linked to different latent axes, surprisingly allowing tokenized control of synthesized images. (3) Our regularizer can improve the spatial controllability of both 2D and 3D GAN models, barely sacrificing synthesis performance. (4) The models trained with our regularizer are compatible with GAN inversion techniques and maintain editability on real images.
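The partial-resampling idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes a StyleGAN-like generator with a 512-dimensional latent code and that a contiguous block of axes (here, indices 0-63, chosen purely for illustration) has been linked to the sky region during LinkGAN training. Redrawing only those axes changes the linked region while leaving the rest of the image's latent content fixed.

```python
import numpy as np

LATENT_DIM = 512
# Latent axes assumed to be linked to the "sky" region (illustrative indices).
SKY_AXES = np.arange(0, 64)

def resample_region(z, linked_axes, rng):
    """Return a copy of z with only the linked axes redrawn from N(0, 1).

    Content outside the linked region stays fixed because the remaining
    latent axes are untouched.
    """
    z_new = z.copy()
    z_new[linked_axes] = rng.standard_normal(len(linked_axes))
    return z_new

rng = np.random.default_rng(0)
z = rng.standard_normal(LATENT_DIM)       # original latent code
z_sky = resample_region(z, SKY_AXES, rng)  # resample only the sky-linked axes

# Only the linked axes differ; all other axes are byte-identical.
assert np.array_equal(z[64:], z_sky[64:])
assert not np.array_equal(z[:64], z_sky[:64])
```

In the paper's setting, feeding both `z` and `z_sky` through the trained generator would produce two images that differ only in the linked spatial region.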
