Deep Image Synthesis from Intuitive User Input: A Review and
Perspectives
Yuan Xue1, Yuan-Chen Guo2, Han Zhang3,
Tao Xu4, Song-Hai Zhang2, Xiaolei Huang1
1The Pennsylvania State University, University Park, PA, USA
2Tsinghua University, Beijing, China
3Google Brain, Mountain View, CA, USA
4Facebook, Menlo Park, CA, USA
Abstract
In many applications of computer graphics, art and
design, it is desirable for a user to provide intuitive
non-image input, such as text, sketch, stroke, graph
or layout, and have a computer system automatically
generate photo-realistic images that adhere to the
input content. While classic works that allow such
automatic image content generation have followed
a framework of image retrieval and composition,
recent advances in deep generative models such as
generative adversarial networks (GANs), variational
autoencoders (VAEs), and flow-based methods have
enabled more powerful and versatile image generation
tasks. This paper reviews recent works for image
synthesis given intuitive user input, covering advances
in input versatility, image generation methodology,
benchmark datasets, and evaluation metrics. This
motivates new perspectives on input representation and
interactivity, cross pollination between major image
generation paradigms, and evaluation and comparison
of generation methods.
Keywords: Image Synthesis, Intuitive User Input,
Deep Generative Models, Synthesized Image Quality
Evaluation
1 Introduction
Machine learning and artificial intelligence have given
computers the ability to mimic or even defeat humans
in tasks like playing chess and Go, recognizing
objects from images, and translating from one language to
another. An interesting next pursuit would be: can
computers mimic creative processes, such as mimicking
painters in making pictures, or assisting artists or archi-
tects in making artistic or architectural designs? In
fact, in the past decade, we have witnessed advances in systems that synthesize an image from text description
[143, 98, 152, 142] or from learned style constant [50],
paint a picture given a sketch [106, 27, 25, 73], ren-
der a photorealistic scene from a wireframe [61, 134],
create virtual reality content from images and videos
[121], among others. A comprehensive review of such
systems can inform about the current state-of-the-art
in such pursuits, reveal open challenges and illuminate
future directions. In this paper, we make an attempt
at a comprehensive review of image synthesis and ren-
dering techniques given simple, intuitive user inputs
such as text, sketches or strokes, semantic label maps,
poses, visual attributes, graphs and layouts. We first
present ideas on what makes a good paradigm for image
synthesis from intuitive user input and review popular
metrics for evaluating the quality of generated images.
We then introduce several mainstream methodologies
for image synthesis given user inputs, and review al-
gorithms developed for application scenarios specific to
different formats of user inputs. We also summarize ma-
jor benchmark datasets used by current methods, and
advances and trends in image synthesis methodology.
Last, we provide our perspective on future directions
towards developing image synthesis models capable of
generating complex images that are closely aligned with
user input condition, have high visual realism, and ad-
here to constraints of the physical world.
2 What Makes a Good Paradigm for Image Synthesis from Intuitive User Input?
2.1 What Types of User Input Do We
Need?
For an image synthesis model to be user-friendly and
applicable in real-world applications, user inputs that
are intuitive, easy for interactive editing, and commonly
used in the design and creation processes are desired.
We define an input modality to be intuitive if it has the
following characteristics:
•Accessibility. The input should be easy to access,
especially for non-professionals. Take sketch as an
example: even people without any trained drawing skills
can express rough ideas through sketching.
•Expressiveness. The input should be expressive
enough to allow someone to convey not only simple
concepts but also complex ideas.
•Interactivity. The input should be interactive to
some extent, so that users can modify the input
content interactively and fine-tune the synthesized
output in an iterative fashion.
Taking painting as an example, a sketch is an intu-
itive input because it is what humans use to design the
composition of the painting. On the other hand, being
intuitive often means that the information provided by
the input is limited, which makes the generation task
more challenging. Moreover, for different types of ap-
plications, the suitable forms of user input can be quite
different.
For image synthesis with intuitive user input, the
most relevant and well-investigated method is with con-
ditional image generation models. In other words, user
inputs are treated as conditional input to the synthesis
model to guide the generation process by conditional
generative models. In this review, we will mainly dis-
cuss mainstream conditional image generation applica-
tions including those using text descriptions, sketches
or strokes, semantic maps, poses, visual attributes, or
graphs as intuitive input. The processing and rep-
resentation of user input are usually application- and
modality-dependent. When given text descriptions as
input, pretrained text embeddings are often used to
convert text into a vector-representation of input words.
Image-like inputs, such as sketches, semantic maps and
poses, are often represented as images and processed ac-
cordingly. In particular, one-hot encoding can be used
in semantic maps to represent di erent categories, and
keypoint maps can be used to encode poses where each
channel represents the position of a body keypoint; both
result in multi-channel image-like tensors as input. Us-
ing visual attributes as input is most similar to general
conditional generation tasks, where attributes can be
provided in the form of class vectors. For graph-like
user inputs, additional processing steps are required
to extract relationship information represented in the
graphs. For instance, graph convolutional networks
(GCNs) [53] can be applied to extract node features
from input graphs. More details of the processing and representation methods of various input types will be
reviewed and discussed in Sec. 4.
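As a concrete illustration of the image-like encodings described above, the following NumPy sketch (the function name, array names and category count are ours, not from any cited work) converts a semantic label map into the one-hot, multi-channel tensor that a conditional generator would typically consume.

```python
import numpy as np

def one_hot_semantic_map(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Convert an (H, W) integer label map into a (num_classes, H, W) one-hot tensor.

    Each channel is a binary mask for one semantic category, giving the
    multi-channel, image-like conditioning input described above.
    """
    # Compare every pixel's class id against each channel index.
    return (label_map[None, :, :] == np.arange(num_classes)[:, None, None]).astype(np.float32)

# Toy 4x4 label map with 3 categories (class ids 0, 1, 2).
toy_map = np.array([[0, 0, 1, 1],
                    [0, 2, 2, 1],
                    [0, 2, 2, 1],
                    [0, 0, 1, 1]])
cond = one_hot_semantic_map(toy_map, num_classes=3)
print(cond.shape)  # (3, 4, 4)
```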
2.2 How Do We Evaluate the Output
Synthesized Images?
The goodness of an image synthesis method depends on
how well its output adheres to user input, whether the
output is photorealistic or structurally coherent, and
whether it can generate a diverse pool of images that
satisfy requirements. There have been general metrics
designed for evaluating the quality and sometimes di-
versity of synthesized images. Widely adopted metrics
use different methods to extract features from images
and then calculate different scores or distances. Such met-
rics include Peak Signal-to-Noise Ratio (PSNR), Incep-
tion Score (IS), Fréchet Inception Distance (FID), struc-
tural similarity index measure (SSIM) and Learned Per-
ceptual Image Patch Similarity (LPIPS).
Peak Signal-to-Noise Ratio (PSNR) measures the
physical quality of a signal by the ratio between the
maximum possible power of the signal and the power of
the noise affecting it. For images, PSNR can be repre-
sented as

PSNR = \frac{1}{3} \sum_{k} 10 \log_{10} \frac{\mathrm{DR}^2}{\frac{1}{m} \sum_{i,j} (t_{i,j,k} - y_{i,j,k})^2},   (1)

where k indexes the image channels (3 for RGB), DR is the dynamic
range of the image (255 for 8-bit images), m is the num-
ber of pixels, i, j are indices iterating over every pixel, and
t and y are the reference image and synthesized image,
respectively.
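A direct NumPy transcription of Eq. (1) is sketched below; the helper and argument names are ours, and 8-bit RGB inputs are assumed unless a different dynamic range is passed.

```python
import numpy as np

def psnr(reference: np.ndarray, synthesized: np.ndarray, dynamic_range: float = 255.0) -> float:
    """PSNR between a reference and a synthesized image, following Eq. (1).

    Both inputs are (H, W, 3) arrays; the score is averaged over the
    color channels. Function and argument names are ours.
    """
    scores = []
    for k in range(reference.shape[-1]):
        mse = np.mean((reference[..., k].astype(np.float64) -
                       synthesized[..., k].astype(np.float64)) ** 2)
        scores.append(10.0 * np.log10(dynamic_range ** 2 / mse))
    return float(np.mean(scores))
```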
The Inception Score (IS) [103] uses a pre-trained In-
ception [112] network to compute the KL-divergence
between the conditional class distribution and the
marginal class distribution. The Inception Score is defined as

IS = \exp\big( \mathbb{E}_{x}\, \mathrm{KL}(P(y|x) \,\|\, P(y)) \big),   (2)

where x is an input image and y is the label predicted
by an Inception model. A high inception score indicates
that the generated images are diverse and semantically
meaningful.
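For reference, a minimal sketch of the IS computation from Eq. (2) is given below. It assumes the per-image class probabilities have already been produced by a pre-trained classifier (Inception in the original formulation); the function and argument names are ours, and in practice the score is usually averaged over several splits of the generated set.

```python
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    """Compute IS from an (N, C) array of per-image class probabilities.

    probs[i] is P(y|x_i) as predicted by a pre-trained classifier; the
    marginal P(y) is the mean of the conditionals over the generated set.
    """
    p_y = probs.mean(axis=0, keepdims=True)  # marginal class distribution
    # KL(P(y|x) || P(y)) for every image, summed over classes.
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```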
Fréchet Inception Distance (FID) [34] is a popular
evaluation metric for image synthesis tasks, especially
for Generative Adversarial Network (GAN) based mod-
els. It computes the divergence between the synthetic
data distribution and the real data distribution:

FID = \| \hat{m} - m \|_2^2 + \mathrm{Tr}\big( \hat{C} + C - 2 (C \hat{C})^{1/2} \big),   (3)

where m, C and \hat{m}, \hat{C} represent the mean and covari-
ance of the feature embeddings of the real and the syn-
thetic distributions, respectively. The feature embed-
ding is extracted from a pre-trained Inception-v3 [112]
model.
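A sketch of the FID computation of Eq. (3) over pre-extracted feature embeddings is shown below; extracting the features with a pre-trained Inception-v3 is assumed to happen elsewhere, and the function name is ours.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Compute FID from (N, D) arrays of Inception feature embeddings.

    real_feats / fake_feats are features of real and synthesized images
    extracted by a pre-trained Inception-v3 network (not shown here).
    """
    m, m_hat = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    c, c_hat = np.cov(real_feats, rowvar=False), np.cov(fake_feats, rowvar=False)
    cov_sqrt = sqrtm(c @ c_hat)            # matrix square root of C * C_hat
    if np.iscomplexobj(cov_sqrt):          # drop tiny imaginary parts from numerical error
        cov_sqrt = cov_sqrt.real
    return float(np.sum((m_hat - m) ** 2) + np.trace(c + c_hat - 2.0 * cov_sqrt))
```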
Structural Similarity Index Measure (SSIM) [126]
or the multi-scale structural similarity (MS-SSIM) met-
ric [127] gives a relative similarity score to an image
against a reference one, which is different from absolute
measures like PSNR. The SSIM is defined as:

SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)},   (4)

where \mu and \sigma^2 indicate the mean and variance of the two
windows x and y, \sigma_{xy} is their covariance, and c_1 and c_2 are two variables that sta-
bilize the division when the denominator is weak. The SSIM
measures perceived image quality considering structural
information. It tests pair-wise similarity between gen-
erated images, where a lower score indicates higher di-
versity of generated images (i.e., less mode collapse).
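The sketch below computes a simplified, single-window SSIM directly from Eq. (4). Real implementations average the score over sliding, often Gaussian-weighted, local windows; the constants c1 and c2 follow commonly used defaults rather than values specified in this survey.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, dynamic_range: float = 255.0) -> float:
    """Simplified SSIM computed over two whole image windows.

    Standard implementations slide a local window over the image and
    average the local scores; a single global window keeps the sketch short.
    """
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```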
Another metric based on features extracted from pre-
trained CNN networks is the Learned Perceptual Image
Patch Similarity (LPIPS) score [145]. The distance is
calculated as
d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \| w_l \odot (\hat{y}^l_{hw} - \hat{y}^l_{0hw}) \|_2^2,   (5)

where \hat{y}^l, \hat{y}^l_0 \in \mathbb{R}^{H_l \times W_l \times C_l} are unit-normalized feature
stacks from the l-th layer of a pre-trained CNN and w_l in-
dicates channel-wise weights. LPIPS evaluates percep-
tual similarity between image patches using the learned
deep features from trained neural networks.
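A rough sketch of the LPIPS-style distance in Eq. (5) is given below, assuming the per-layer feature stacks and the learned channel weights have already been obtained from a pre-trained CNN (that part is the job of the actual LPIPS package and is not shown); all names are ours.

```python
import numpy as np

def lpips_like_distance(feats_x, feats_x0, channel_weights):
    """LPIPS-style distance from pre-extracted feature stacks.

    feats_x[l] and feats_x0[l] are (H_l, W_l, C_l) activations of the two
    images at layer l of some pre-trained CNN; channel_weights[l] is a
    (C_l,) vector of learned weights.
    """
    total = 0.0
    for f_x, f_x0, w in zip(feats_x, feats_x0, channel_weights):
        # Unit-normalize each spatial position's feature vector along channels.
        f_x = f_x / (np.linalg.norm(f_x, axis=-1, keepdims=True) + 1e-10)
        f_x0 = f_x0 / (np.linalg.norm(f_x0, axis=-1, keepdims=True) + 1e-10)
        diff = w * (f_x - f_x0)                   # channel-wise weighting
        h, w_dim, _ = diff.shape
        total += (diff ** 2).sum() / (h * w_dim)  # spatial average of squared L2
    return float(total)
```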
For flow-based models [102, 52] and autoregres-
sive models [118, 117, 104], the average negative log-
likelihood ( i.e., bits per dimension) [118] is often used
to evaluate the quality of generated images. It is cal-
culated as the negative log-likelihood with log base 2
divided by the number of pixels, which is interpretable
as the number of bits that a compression scheme based
on this model would need to compress every RGB color
value [118].
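As a concrete illustration of this conversion, the following hypothetical helper turns a summed negative log-likelihood reported in nats into bits per dimension.

```python
import math

def bits_per_dimension(total_nll_nats: float, num_images: int, height: int,
                       width: int, channels: int = 3) -> float:
    """Convert a summed negative log-likelihood (in nats) to bits per dimension.

    Dividing by log(2) changes the base to bits; dividing by the number of
    RGB values gives the average bits a model-based compressor would need
    per color value. Argument names are ours.
    """
    num_dims = num_images * height * width * channels
    return total_nll_nats / (math.log(2.0) * num_dims)

# Example: an average NLL of 2.07 nats per dimension is roughly 2.99 bits/dim.
print(bits_per_dimension(2.07 * 32 * 32 * 3, num_images=1, height=32, width=32))
```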
Besides metrics designed for general purposes, spe-
cific evaluation metrics have been proposed for differ-
ent applications with various input types. For instance,
using text descriptions as input, R-precision [133] eval-
uates whether a generated image is well conditioned on
the given text description. The R-precision is measured
by retrieving relevant text given an image query. For
sketch-based image synthesis, classification accuracy is
used to measure the realism of the synthesized objects
[27, 25] and how well the identities of synthesized re-
sults match those of real images [77]. Also, similarity
between input sketches and edges of synthesized images
can be measured to evaluate the correspondence be-
tween the input and output [25]. In the scenario of pose-
guided person image synthesis, "masked" versions of IS
and SSIM, Mask-IS and Mask-SSIM, are often used to
ignore the effects of background [79, 80, 107, 111, 154],
since we only want to focus on the synthesized human body. Similar to sketch-based synthesis, detection score
(DS) is used to evaluate how well the synthesized person
can be detected [107, 154] and keypoint accuracy can
be used to measure the level of correspondence between
keypoints [154]. For semantic maps, a commonly used
metric tries to restore the semantic-map input from gen-
erated images using a pre-trained segmentation network
and then compares the restored semantic map with the
original input by Intersection over Union (IoU) score or
other segmentation accuracy measures. Similarly, using
visual attributes as input, a pre-trained attribute clas-
sifier or regressor can be used to assess the attribute
correctness of generated images.
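The semantic-map metric just described can be sketched as follows: a pre-trained segmentation network (not shown) restores a label map from the generated image, and the mean IoU against the user-provided input map is computed. The helper and its argument names are ours.

```python
import numpy as np

def mean_iou(restored_map: np.ndarray, input_map: np.ndarray, num_classes: int) -> float:
    """Mean IoU between a restored semantic map and the original input map.

    restored_map is the (H, W) label map predicted by a pre-trained
    segmentation network on the generated image; input_map is the (H, W)
    semantic map the user provided as the condition.
    """
    ious = []
    for c in range(num_classes):
        pred_c, true_c = restored_map == c, input_map == c
        union = np.logical_or(pred_c, true_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        inter = np.logical_and(pred_c, true_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
```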
3 Overview of Mainstream Conditional Image Synthesis Paradigms
Image synthesis models with intuitive user inputs of-
ten involve different types of generative models, more
specifically, conditional generative models that treat
user input as an observed conditioning variable. Two ma-
jor goals of the synthesis process are high realism of
the synthesized images, and correct correspondences be-
tween input conditions and output images. In existing
literature, methods vary from more traditional retrieval
and composition based methods to more recent deep
learning based algorithms. In this section, we give an
overview of the architectures and main components of
different conditional image synthesis models.
3.1 Retrieval and Composition
Traditional image synthesis techniques mainly take a
retrieval and composition paradigm. In the retrieval
stage, candidate images / image fragments are fetched
from a large image collection, under some user-provided
constraints, like texts, sketches and semantic label
maps. Methods like edge extraction, saliency detec-
tion, object detection and semantic segmentation are
used to pre-process images in the collection according
to di erent input modalities and generation purposes,
after which the retrieval can be performed using shal-
low image features like HoG and Shape Context [5].
The user may interact with the system to improve the
quality of the retrieved candidates. In the composition
stage, the selected images or image fragments are com-
bined by Poisson Blending, Alpha blending, or a hybrid
of both [15], resulting in the final output image.
The biggest advantage of synthesizing images
through retrieval and composition is its controllability
and interpretability. The user can intervene in the
generation process at any stage, and easily find out
whether the output image looks the way it should
be. However, it cannot generate instances that do not appear
in the collection, which restricts the range and diversity
of the output.
3.2 Conditional Generative Adversarial
Networks (cGANs)
Generative Adversarial Networks (GANs) [29] have
achieved tremendous success in various image gener-
ation tasks. A GAN model typically consists of two
networks: a generator network that learns to generate
realistic synthetic images and a discriminator network
that learns to differentiate between real images and syn-
thetic images generated by the generator. The two net-
works are optimized alternately through adversarial
training. Vanilla GAN models are designed for uncon-
ditional image generation, which implicitly model the
distribution of images. To gain more control over the
generation process, conditional GANs or cGANs [86]
synthesize images based on both a random noise vector
and a condition vector provided by users. The objective
of training a cGAN as a minimax game is

\min_G \max_D \mathcal{L}_{\mathrm{cGAN}} = \mathbb{E}_{(x,y) \sim p_{\mathrm{data}}(x,y)}[\log D(x,y)] + \mathbb{E}_{z \sim p(z),\, y \sim p_{\mathrm{data}}(y)}[\log(1 - D(G(z,y), y))],   (6)
where x is the real image, y is the user input, and z is
the random noise vector. There are different ways of
incorporating user input in the discriminator, such as
inserting it at the beginning of the discriminator [86],
middle of the discriminator [88], or the end of the dis-
criminator [91].
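To make the alternating optimization of Eq. (6) concrete, the following PyTorch-style sketch performs one discriminator and one generator update for a generic cGAN. G, D, the optimizers, and the form of the condition y (e.g., a text embedding or attribute vector) are assumed to be defined elsewhere, and the common non-saturating logistic losses are used; this is an illustrative sketch, not the formulation of any particular paper.

```python
import torch
import torch.nn.functional as F

def cgan_step(G, D, opt_g, opt_d, x_real, y_cond, z_dim=128):
    """One alternating cGAN update, conditioning both G and D on user input y.

    G(z, y) -> fake image; D(x, y) -> realness logit. How the condition is
    injected into either network is left abstract; this only mirrors Eq. (6).
    """
    z = torch.randn(x_real.size(0), z_dim, device=x_real.device)

    # Discriminator update: real (x, y) pairs vs. generated (G(z, y), y) pairs.
    opt_d.zero_grad()
    d_real = D(x_real, y_cond)
    d_fake = D(G(z, y_cond).detach(), y_cond)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool D on generated images for the same condition y.
    opt_g.zero_grad()
    d_fake = D(G(z, y_cond), y_cond)
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```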
3.3 Variational Auto-encoders (VAEs)
Variational auto-encoders (VAEs) proposed in [51] ex-
tend the idea of auto-encoder and introduce variational
inference to approximate the latent representation z en-
coded from the input data x. The encoder converts
x into z in a latent space where the decoder tries to
reconstruct x from z. Similar to GANs, which typ-
ically assume the input noise vector follows a Gaus-
sian distribution, VAEs use variational inference to ap-
proximate the posterior p(z|x) given that p(z) follows a
Gaussian distribution. After the training of VAE, the
decoder is used as a generator, similar to the genera-
tor in GAN, which can draw samples from the latent
space and generate new synthetic data. Based on the
vanilla VAE, Sohn et al. proposed a conditional VAE
(cVAE) [109, 54, 44] which is a conditional directed
graphical model whose input observations modulate the
latent variables that generate the outputs. Similar to
cGANs, cVAEs allow the user to provide guidance to
the image synthesis process via user input.
Figure 1: A general illustration of cGAN and cVAE that can be applied to image synthesis with intuitive user inputs. During inference, the generator in cGAN and the decoder in cVAE generate new images x̂ under the guidance of user input y and a noise vector or latent variable z.
The training objective for cVAE is
\max_{\phi, \theta} \mathcal{L}_{\mathrm{cVAE}} = \mathbb{E}_{z \sim Q}[\log P_{\theta}(x|z,y)] - D_{\mathrm{KL}}[Q_{\phi}(z|x,y) \,\|\, p(z|y)],   (7)
where x is the real image, y is the user input, z is the
latent variable, and p(z|y) is the prior distribution of
the latent vectors, such as a Gaussian distribution. \phi
and \theta are the parameters of the encoder Q and decoder P
networks, respectively. An illustration of cGAN and
cVAE can be found in Fig. 1.
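As a rough counterpart to Eq. (7), the sketch below writes a cVAE training loss in PyTorch under the common simplifying assumptions of a unit-Gaussian prior and a Gaussian decoder (so the reconstruction term reduces to a pixel-wise L2 loss). The encoder and decoder callables and their interfaces are placeholders, not any specific published architecture.

```python
import torch
import torch.nn.functional as F

def cvae_loss(encoder, decoder, x_real, y_cond):
    """Negative of the cVAE objective in Eq. (7), assuming a unit-Gaussian prior.

    encoder(x, y) is assumed to return the mean and log-variance of Q(z|x, y);
    decoder(z, y) returns a reconstruction of x.
    """
    mu, logvar = encoder(x_real, y_cond)

    # Reparameterization trick: sample z ~ Q(z|x, y) differentiably.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    # Reconstruction term E_{z~Q}[log P(x|z, y)], here a pixel-wise L2 term.
    x_rec = decoder(z, y_cond)
    rec_loss = F.mse_loss(x_rec, x_real, reduction="sum")

    # Closed-form KL divergence between Q(z|x, y) and the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec_loss + kl
```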
3.4 Other Learning-based Methods
Other learning-based conditional image synthesis mod-
els include hybrid methods such as the combina-
tion of VAE and GAN models [57, 4], autoregressive
models, and normalizing flow-based models. Among
these methods, autoregressive models such as Pixel-
RNN [118], PixelCNN [117], and PixelCNN++ [104]
provide tractable likelihood over priors such as class
conditions. The generation process is similar to an au-
toregression model: while classic autoregression models
predict future information based on past observations,
image autoregressive models synthesize next image pix-
els based on previously generated or existing nearby
pixels.
Flow-based models [102], or normalizing flow based
methods, consist of a sequence of invertible transfor-
mations which can convert a simple distribution (e.g.,
Gaussian) into a more complex one with the same di-
mension. While flow-based methods have not been
widely applied to image synthesis with intuitive user
inputs, a few works [52] show that they have great po-
tential in visual attributes guided synthesis and may be
applicable to broader scenarios.
Among the aforementioned mainstream paradigms,
traditional retrieval and composition methods have the
advantage of better controllability and interpretability,
although the diversity of synthesized images and the
flexibility of the models are limited. In comparison,
deep learning based methods generally have stronger
feature representation capacity, with GANs having the
potential of generating images with the highest quality.
While having been successfully applied to various im-
age synthesis tasks due to their flexibility, GAN models
lack tractable and explicit likelihood estimation. On
the contrary, autoregressive models admit a tractable
likelihood estimation, and can assign a probability to a
single sample. VAEs with latent representation learn-
ing provide better feature representation power and can
be more interpretable. Compared with VAEs and au-
toregressive models, normalizing ow methods provide
both feature representation power and tractable likeli-
hood estimation.
4 Methods Specific to Applications with Various Input Types
In this section, we review works in the literature that
target application scenarios with specific input types.
We will review methods for image synthesis from text
descriptions, sketches and strokes, semantic label maps,
poses, and other input modalities including visual at-
tributes, graphs and layouts. Among the different in-
put types, text descriptions are flexible, expressive and
user-friendly, yet the comprehension of input content
and responding to interactive editing can be challeng-
ing to the generative models; example applications of
text-to-image systems are computer generated art, im-
age editing, computer-aided design, interactive story
telling and visual chat for education and language learn-
ing. Image-like inputs such as sketches and semantic
maps contain richer information and can better guide
the synthesis process, but may require more effort from
users to provide adequate input; such inputs can be
used in applications such as image and photo editing,
computer-assisted painting and rendering. Other in-
puts such as visual attributes, graphs and layouts allow
appearance, structural or other constraints to be given
as conditional input and can help guide the generation
of images that preserve the visual properties of objects
and geometric relations between objects; they can be
used in various computer-aided design applications for
architecture, manufacturing, publishing, arts, and fash-
ion.
4.1 Text Description as Input
The task of text-to-image synthesis (Fig. 2) is using
descriptive sentences as inputs to guide the generation
of corresponding images. The generated image types
vary from single-object images [90, 128] to multi-object
images with complex background [72]. Descriptive sen-
tences in a natural language offer a general and flexible way of describing visual concepts and objects. As
text is one of the most intuitive types of user input,
text-to-image synthesis has gained much attention from
the research community and numerous efforts have been
made towards developing better text-to-image synthesis
models. In this subsection, we will review state-of-the-
art text-to-image synthesis models and discuss recent
advances.
Learning Correspondence Between Text and Im-
age Representations. One of the major challenges of
the text-to-image synthesis task is that the input text
and output image are in di erent modalities, which re-
quires learning of correspondence between text and im-
age representations. Such multi-modality nature and
the need to learn text-to-image correspondence moti-
vated Reed et al. [100] to first propose to solve the
task using a GAN model. In [100], the authors pro-
posed to generate images conditioned on the embed-
ding of text descriptions, instead of class labels as in
traditional cGANs [86]. To learn the text embedding
from input sentences, a deep convolutional image en-
coder and a character level convolutional-recurrent text
encoder are trained jointly so that the text encoder can
learn a vector-representation of the input text descrip-
tions. Adapted from the DCGAN architecture [99], the
learned text encoding is then concatenated with both
the input noise vector in the generator and the im-
age features in the discriminator along the depth di-
mension. The method [100] generated encouraging re-
sults on both the Oxford-102 dataset [90] and the CUB
dataset [128], with the limitation that the resolution
of generated images is relatively low (64 × 64). An-
other work proposed around the same time as DCGAN
is by Mansimov et al. [81], which proposes a combi-
nation of a recurrent variational autoencoder with an
attention model which iteratively draws patches on a
canvas, while attending to the relevant words in the
description. Input text descriptions are represented as
a sequence of consecutive words and images are rep-
resented as a sequence of patches drawn on a canvas.
For image generation which samples from a Gaussian
distribution, the Gaussian mean and variance depend
on the previous hidden states of the generative LSTM.
Experiments by [81] on the MS-COCO dataset show
reasonable results that correspond well to text descrip-
tions.
To further improve the visual quality and realism of
generated images given text descriptions, Han et al.
proposed multi-stage GAN models, StackGAN [143]
and StackGAN++ [144], to enable multi-scale, incre-
mental refinement in the image generation process.
Given text descriptions, StackGAN [143] decomposes
the text-to-image generative process into two stages,
where in Stage-I it captures basic object features and
background layout, then in Stage-II it refines details of
Figure 2: Example bird image synthesis results given
text descriptions as input with an attention mechanism.
Key words in the input sentences are correctly captured
and represented in the generated images. Image taken
from AttnGAN [133].
the objects and generates a higher resolution image.
Unlike [100] which transforms high dimensional text
encoding into low dimensional latent variables, Stack-
GAN adopts a Conditioning Augmentation technique that
samples the latent variables from an independent Gaus-
sian distribution parameterized by the text encoding.
Experiments on the Oxford-102 [90], CUB [128] and
COCO [72] datasets show that StackGAN can generate
compelling images with resolution up to 256 × 256. In
StackGAN++ [144], the authors extended the original
StackGAN into a more general and robust model which
contains multiple generators and discriminators to han-
dle images at different resolutions. Then, Zhang et
al.[146] extended the multi-stage generation idea by
proposing a HDGAN model with a single-stream gen-
erator and multiple hierarchically-nested discrimina-
tors for high-resolution image synthesis. Hierarchically-
nested discriminators distinguish outputs from interme-
diate layers of the generator to capture hierarchical vi-
sual features. The training of HDGAN is done via opti-
mizing a pair loss [100] and a patch-level discriminator
loss [43].
In addition to generation via multi-stage refine-
ment [143, 144], the attention mechanism is introduced
to improve text-to-image synthesis at a more fine-
grained level. Xu et al. introduced AttnGAN [133],
an attention driven image synthesis model that gener-
ates images by focusing on different regions described
by different words of the text input. A Deep Attentional
Multimodal Similarity Model (DAMSM) module is also
proposed to match the learned embedding between im-
age regions and text at the word level. To achieve better
semantic consistency between text and image, Qiao et
al.[98] proposed MirrorGAN which guides the image
generation with both sentence- and word-level atten-
tion and further tried to reconstruct the original text
input to guarantee the image-text consistency. The backbone of MirrorGAN uses a multi-scale generator as
in [144]. The proposed text reconstruction model is pre-
trained to stabilize the training of MirrorGAN. Zhu et
al.[152] introduces a gating mechanism where a writing
gate writes selected important textual features from the
given sentence into a dynamic memory, and a response
gate adaptively reads from the memory and the visual
features from some initially generated images. The pro-
posed DM-GAN relies less on the quality of the initial
images and can refine poorly-generated initial images
with wrong colors and rough shapes.
To learn expression variants in different text descrip-
tions of the same image, Yin et al. proposes SD-
GAN [136] to distill the shared semantics from texts
that describe the same image. The authors propose a
Siamese structure with a contrastive loss to minimize
the distance between images generated from descrip-
tions of the same image, and maximize the distance
between those generated from the descriptions of dif-
ferent images. To retain the semantic diversity for ne-
grained image generation, a semantic-conditioned batch
normalization is also introduced for enhanced visual-
semantic embedding.
Location and Layout Aware Generation. With
advances in correspondence learning between text and
image, content described in the input text can already
be well captured in the generated image. However, to
achieve finer control of generated images such as object
locations, additional inputs or intermediate steps are of-
ten required. For text-based and location-controllable
synthesis, Reed et al. [101] proposes to generate images
conditioned on both the text description and object lo-
cations. Built upon the similar idea of inferring scene
structure for image generation, Hong et al. [37] intro-
duces a novel hierarchical approach for text-to-image
synthesis by inferring semantic layout from the text de-
scription. Bounding boxes are first generated from text
input through an auto-regressive model, then semantic
layouts are refined from the generated bounding boxes
using a convolutional recurrent neural network. Con-
ditional on both the text and the semantic layouts,
the authors adopt a combination of pix2pix [43] and
CRN [12] image-to-image translation model to gener-
ate the final images. With predicted semantic layouts,
this work [37] has potential in generating more realis-
tic images containing complex objects such as those in
the MS-COCO [72] dataset. Li et al. [63] extends the
work by [37] and introduces Obj-GAN, which generates
salient objects given text description. Semantic layout
is rst generated as in [37] then later converted into the
synthetic image. A Fast R-CNN [28] based object-wise
discriminator is developed to retain the matching be-
tween generated objects and the input text and layout.
Experiments on the MS-COCO dataset show improved
performance in generating complex scenes compared to
previous methods.
Compared to [37], Johnson et al. [46] includes an-
other intermediate step which converts the input sen-
tences into scene graphs before generating the semantic
layouts. A graph convolutional network is developed to
generate embedding vectors for each object. Bounding
boxes and segmentation masks for each object, consti-
tuting the scene layout, are converted from the object
embedding vectors. Final images are synthesized by a
CRN model [12] from the noise vectors and scene lay-
outs. In addition to text input, [46] also allows direct
generation from input scene graphs. Experiments are
conducted on Visual Genome [56] dataset and COCO-
Stuff [7] dataset, which is augmented from a subset of
the MS-COCO [72] dataset, and show better depiction
of complex sentences with many objects than previous
method [143].
Without taking the complete semantic layout as ad-
ditional input, Hinz et al. [35] introduces a model con-
sisting of a global pathway and an object pathway for
finer control of object location and size within an image.
The global pathway is responsible for creating a general
layout of the global scene, while the object pathway gen-
erates object features within the given bounding boxes.
Then the outputs of the global and object pathways are
combined to generate the final synthetic image. When
there is no text description available, [35] can take a
noise vector and the individual object bounding boxes
as input.
Taking an approach different from GAN-based meth-
ods, Tan et al. [113] proposes a Text2Scene model
for text-to-scene generation, which learns to sequen-
tially generate objects and their attributes such as lo-
cation, size, and appearance at every time step. With a
convolutional recurrent module and attention module,
Text2Scene can generate abstract scenes and object lay-
outs directly from descriptive sentences. For image syn-
thesis, Text2Scene retrieves patches from real images to
generate the image composites.
Fusion of Conditional and Unconditional Gen-
eration. While most existing text-to-image synthe-
sis models are based on conditional image generation,
Bodla et al. [6] proposes a FusedGAN which combines
unconditional image generation and conditional image
generation. An unconditional generator produces a
structure prior independent of the condition, and the
other conditional generator re nes details and creates
an image that matches the input condition. FusedGAN
is evaluated on both the text-to-image generation task
and the attribute-to-face generation task which will be
discussed later in Sec. 4.3.1.
Evaluation Metrics for Text to Image Synthe-
sis. Widely used metrics for image synthesis such
as IS [103] lack awareness of matching between the text
and generated images. Recently, more efforts have been focused on proposing more accurate evaluation metrics
for text to image synthesis and for evaluating the corre-
spondence between generated image content and input
condition. R-precision is proposed in [133] to evaluate
whether a generated image is well conditioned on the
given text description. Hinz et al. proposes the Seman-
tic Object Accuracy (SOA) score [36] which uses a pre-
trained object detector to check whether the generated
image contains the objects described in the caption, es-
pecially for the MS-COCO dataset. SOA shows better
correlation with human perception than IS in the user
study and provides a better guidance for training text
to image synthesis models.
Benchmark Datasets. For text-guided image synthe-
sis tasks, popular benchmark datasets include datasets
with a single object category and datasets with multiple
object categories. For single object category datasets,
the Oxford-102 dataset [90] contains 102 different types
of flowers common in the UK. The CUB dataset [128]
contains photos of 200 bird species, most of which are
from North America. Datasets with multiple object cat-
egories and complex relationships can be used to train
models for more challenging image synthesis tasks. One
such dataset is MS-COCO [72], which has a training set
with 80k images and a validation set with 40k images.
Each image in the COCO dataset has five text descrip-
tions.
4.2 Image-like Inputs
In this section, we summarize image synthesis works
based on three types of intuitive inputs, namely sketch,
semantic map and pose. We call them "image-like in-
puts" because all of them can be, and have been, repre-
sented as rasterized images. Therefore, synthesizing im-
ages from these image-like inputs can be regarded as an
image-to-image translation problem. Several works pro-
vide general solutions to this problem, like pix2pix [43]
and pix2pixHD [124]. In this survey, we focus on works
that deal with a specific type of input.
4.2.1 Sketches and Strokes as Input
Sketches, or line drawings, can be used to express users'
intention in an intuitive way, even for those without
professional drawing skills. With the widespread use
of touch screens, it has become very easy to create
sketches; and the research community is paying increas-
ingly more attention to the understanding and pro-
cessing of hand-drawn sketches, especially in applica-
tions such as sketch-based image retrieval and sketch-
to-image generation. Generating realistic images from
sketches is not a trivial task, since the synthesized
images need to be aligned spatially with the given
sketches, while maintaining semantic coherence.
Figure 3: A classical pipeline of retrieval-and-
composition methods for synthesis. Candidate images
are generated by composing image segments retrieved
from a pre-built image database. Image taken from [15].
Retrieval-and-Composition based Approaches.
Early approaches of generating image from sketch
mainly take a retrieval-and-composition strategy. For
each object in the user-given sketch, they search for
candidate images in a pre-built object-level image (frag-
ment) database, using some similarity metric to evalu-
ate how well the sketch matches the image. The nal
image is synthesized as the composition of retrieved re-
sults, mainly by image blending algorithms. Chen et
al. [15] presented a system called Sketch2Photo, which
composes a realistic image from a simple free-hand
sketch annotated with text labels. The authors pro-
posed a contour-based filtering scheme to search for
appropriate photographs according to the given sketch
and text labels, and proposed a novel hybrid blending
algorithm, which is a combination of alpha blending
and Poisson blending, to improve the synthesis qual-
ity. Eitz et al. [24] created Photosketcher, a system
that finds semantically relevant regions from appropri-
ate images in a large image collection and composes
the regions automatically. Users can also interact with
the system by drawing scribbles on the retrieved images
to improve region segmentation quality, re-sketching to
find better candidates, or choosing from different blend-
ing strategies. Hu et al. [38] introduced PatchNet, a
hierarchical representation of image regions that sum-
marizes a homogeneous image patch by a graph node
and represents geometric relationships between regions
by labeled graph edges. PatchNet was shown to be a
compact representation that can be used efficiently for
sketch-based, library-driven, interactive image editing.
Wang et al. [120] proposed a sketch-based image syn-
thesis method that compares sketches with contours of
object regions via the GF-HOG descriptor, and novel
images are composited by GrabCut followed by Pois-
son blending or alpha blending. For generating images
of a single object like an animal under user-specified
poses and appearances, Turmukhambetov et al. [115]
presented a sketch-based interactive system that gener-
ates the target image by composing patches of nearest
neighbour images on the joint manifold of ellipses and
contours for object parts.
Deep Learning based Approaches. In recent
years, deep convolutional neural networks (CNNs) have
achieved significant progress in image-related tasks.
CNNs have been used to map sketches to images with
the benefit of being able to synthesize novel images
that are different from those in pre-built databases.
One challenge to using deep CNNs is that training of
such networks requires paired sketch-image data, which
can be expensive to acquire. Hence, various techniques
have been proposed to generate synthetic sketches from
images, and then use the synthetic sketch and image
pairs for training. Methods for synthetic sketch gen-
eration include boundary detection algorithms such as
Canny, Holistically-nested Edge Detection (HED) [132],
and stylization algorithms for image-to-sketch conver-
sion [130, 48, 64, 62, 26]. Post-processing steps are
adopted for small stroke removal, spline fitting [32] and
stroke simplification [108]. A few works utilize crowd-
sourced free-hand sketches for training [25, 73]. They ei-
ther construct pseudo-paired data by matching sketches
and images [25], or propose a method that does not re-
quire paired data [73]. Another aspect of CNN train-
ing that has been investigated is the representation of
sketches. In some works [16, 68], the input sketches
are transformed into distance fields to obtain a dense
representation, but no experimental comparisons have
been done to demonstrate which form of input is more
suitable for CNNs to process. Next, we review specific
works that utilize a deep-learning based approach for
sketch to image generation.
Treating a sketch as an "image-like" input, several
works use a fully convolutional neural network archi-
tecture to generate photorealistic images. Gucluturk et
al. [30] first attempted to use deep neural networks to
tackle the problem of sketch-based synthesis. They
developed three different models to generate face im-
ages from three different types of sketches, namely line
sketch, grayscale sketch and color sketch. An encoder-
decoder fully convolutional neural network is adopted
and trained with various loss terms. A total variation
loss is proposed to encourage smoothness. Sangkloy et
al. [106] proposed Scribbler, a system that can generate
realistic images from human sketches and color strokes.
XDoG filter is used for boundary detection to gener-
ate image-sketch pairs and color strokes are sampled to
provide color constraints in training. The authors also
use an encoder-decoder network architecture and adopt
similar loss functions as in [30]. The users can interact
with the system in real time. The authors also provide
applications for colorization of grayscale images.
Generative Adversarial Networks have also been used
for sketch-to-image synthesis. Chen et al. [16] proposed
a novel GAN-based architecture with multi-scale inputs
for the problem. The generator and discriminator both
consist of several Masked Residual Unit (MRU) blocks.
MRU takes in a feature map and an image, and outputs
a new feature map, which can allow a network to re-
peatedly condition on an input image, like the recurrent
network. They also adopt a novel data augmentation
technique, which generates sketch-image pairs automat-
ically through edge detection and some post-processing
steps including binarization, thinning, small component
removal, erosion, and spur removal. To encourage diver-
sity of generated images, the authors proposed a diver-
sity loss, which maximizes the L1 distance between the
outputs of two identical input sketches with different
noise vectors. Lu et al. [77] considered the sketch-to-
image synthesis problem as an image completion task
and proposed a contextual GAN for the task. Unlike
a traditional image completion task where only part of
an object is masked, the entire real image is treated
as the missing piece in a joint image that consists of
both sketch and the corresponding photo. The advan-
tage of using such a joint representation is that, in-
stead of using the sketch as a hard constraint, the sketch
part of the joint image serves as a weak contextual con-
straint. Furthermore, the same framework can also be
used for image-to-sketch generation where the sketch
would be the masked or missing piece to be completed.
Ghosh et al. [27] presents an interactive GAN-based
sketch-to-image translation system. As the user draws
a sketch of a desired object type, the system automati-
cally recommends completions and fills the shape with
class-conditioned texture. The result changes as the
user adds or removes strokes over time, which enables
a feedback loop that the user can leverage for interac-
tive editing. The system consists of a shape completion
stage based on a non-image generation network [84],
and a class-conditioned appearance translation stage
based on the encoder-decoder model from MUNIT [41].
To perform class-conditioning more effectively, the au-
thors propose a soft gating mechanism, instead of using
simple concatenation of class codes and features.
Several works focus on sketch-based synthesis for hu-
man face images. Portenier et al. [94] developed an
interactive system for face photo editing. The user can
provide shape and color constraints by sketching on the
original photo, to get an edited version of it. The edit-
ing process is done by a CNN, which is trained on ran-
domly masked face photos with sampled sketches and
color strokes in an adversarial manner. Xia et al. [131]
proposed a two-stage network for sketch-based portrait
synthesis. The stroke calibration network is responsible
for converting the input poorly-drawn sketch to a more
detailed and calibrated one that resembles edge maps.
Then the refined sketch is used in the image synthe-
sis network to get a photo-realistic portrait image. Li
et al. [68] proposed a self-attention module to capture
long-range connections of sketch structures, where self-
attention mechanism is adopted to aggregate features from all positions of the feature map by the calculated
self-attention map. A multi-scale discriminator is used
to distinguish patches of different receptive fields, to si-
multaneously ensure local and global realism. Chen et
al. [14] introduced DeepFaceDrawing, a local-to-global
approach for generating face images from sketches that
uses input sketches as soft constraints and is able to pro-
duce high-quality face images even from rough and/or
incomplete sketches. The key idea is to learn feature
embeddings of key face components and then train a
deep neural network to map the embedded component
features to realistic images.
While most works in sketch-to-image synthesis with
deep learning techniques have focused on synthesiz-
ing object-level images from sketches, Gao et al. [25]
explored synthesis at the scene level by proposing a
deep learning framework for scene-level image gener-
ation from freehand sketches. The framework rst
segments the sketch into individual objects, recog-
nizes their classes, and categories them into fore-
ground/background objects. Then the foreground ob-
jects are generated by an EdgeGAN module that learns
a common vector representation for images and sketches
and maps the vector representation of an input sketch
to an image. The background generation module is
based on the pix2pix [43] architecture. The synthe-
sized foregrounds along with background sketches are
fed to a network to get the nal generated scene. To
train the network and evaluate their method, the au-
thors constructed a composite dataset called Sketchy-
COCO based on the Sketchy database [105], TU-Berlin
dataset [23], QuickDraw dataset, and COCO-Stuff [8].
Considering that collecting paired training data can
be labor intensive, learning from unpaired sketch-photo
data in an unsupervised setting is an interesting di-
rection to explore. Liu et al. [73] proposed an unsu-
pervised solution by decomposing the synthesis process
into a shape translation stage and a content enrichment
stage. The shape translation network transforms an in-
put sketch into a gray-scale image, trained using un-
paired sketches and images, under the supervision of a
cycle-consistency loss. In the content enrichment stage,
a reference image can be provided as style guidance,
whose information is injected into the synthesis process
following the AdaIN framework [40].
Benchmark Datasets. For synthesis from sketches,
various datasets covering multiple types of objects are
used [139, 55, 137, 138, 128, 76, 49, 105, 125, 72, 8].
However, only a few of them [139, 105, 125] have
paired image-sketch data. For the other datasets, edge
maps or line strokes are extracted using edge extrac-
tion or style transfer techniques and used as fake sketch
data for training and validation. SketchyCOCO [25]
built a paired image-sketch dataset from existing image
datasets [8] and sketch datasets [105, 23] by looking for
the most similar sketch with the same class label for
each foreground object in a natural image.
4.2.2 Semantic Label Maps as Input
Figure 4: Illustration of image synthesis from semantic
label maps (columns: semantic map, ground truth, Pix2PixHD, SPADE, SEAN). Image taken from [153].
Synthesizing photorealistic images from semantic la-
bel maps is the inverse problem of semantic image seg-
mentation. It has applications in controllable image
synthesis and image editing. Existing methods either
work with a traditional retrieval-and-composition ap-
proach [47, 3], a deep learning based method [13, 58,
93, 74, 155, 114], or a hybrid of the two [96]. Di er-
ent types of datasets are utilized to allow synthesiz-
ing images of various scenes or subjects, such as in-
door/outdoor scenes, or human bodies.
Retrieval-and-Composition based Methods.
Non-parametric methods follow the traditional
retrieval-and-composition strategy. Johnson et al. [47]
first proposed to synthesize images from semantic
concepts. Given an empty canvas, the user can
paint regions with corresponding keywords at desired
locations. The algorithm searches for candidate
images in the stock and uses a graph-cut based seam
optimization process to generate realistic photographs
for each combination. The best combination with
the minimum seam cost is chosen as the final result.
Bansal et al. [3] proposed a non-parametric matching
and hierarchical composition strategy to synthesize
realistic images from semantic maps. The strategy
consists of four stages: a global consistency stage to
retrieve relevant samples based on indicator vectors of
presented categories, a shape consistency stage to find
candidate segments based on shape context similarity
between the input label mask and the ones in the
database, a part consistency stage and a pixel consis-
tency stage that re-synthesize patches and pixels based
on best-matching areas as measured by Hamming
distance. The proposed method outperforms state-
of-the-art parametric methods like pix2pix [43] and
pix2pixHD [124] both qualitatively and quantitatively.
Deep Learning based Methods. Methods based on
deep learning mainly vary in network architecture de-
sign and optimization objective. Chen et al. [13] pro-
posed a regression approach for synthesizing realistic
images from semantic maps, without the need for adver-
sarial training. To improve synthesis quality, they pro-
posed a Cascaded Refinement Network (CRN), which
progressively generates images from low resolution to
high resolution (up to 2 megapixels at 1024×2048 pixel
resolution) through a cascade of refinement modules.
To encourage diversity in generated images, the authors
proposed a diversity loss, which lets the network out-
put multiple images at a time and optimize diversity
within the collection. Wang et al. [123] proposed a style-
consistent GAN framework that generates images given
a semantic label map input and an exemplary image
indicating style. A novel style-consistent discriminator
is designed to determine whether a pair of images are
consistent in style and an adaptive semantic consistency
loss is optimized to ensure correspondence between the
generated image and input semantic label map.
Having found that directly synthesizing images from
semantic maps through a sequence of convolutions
sometimes produces unsatisfactory results because of
semantic information loss during forward propagation,
some works seek to better use the input semantic map
and preserve semantic information in all stages of the
synthesis network. Park et al. [93] proposed a spatially-
adaptive normalization layer (SPADE), which is a nor-
malization layer with learnable parameters that utilizes
the original semantic map to help retain semantic infor-
mation in the feature maps after the traditional batch
normalization. The authors incorporated their SPADE
layers into the pix2pixHD architecture and produced
state-of-the-art results on multiple datasets. Liu et
al. [74] argue that the convolutional network should
be sensitive to semantic layouts at different locations.
Thus they proposed Conditional Convolution Blocks
(CC Block), where parameters for convolution kernels
are predicted from semantic layouts. They also pro-
posed a feature pyramid semantics-embedding (FPSE)
discriminator, which predicts semantic alignment scores
in addition to real/fake scores. It explicitly forces the
generated images to be better aligned semantically with
the given semantic map. Zhu et al. [155] proposed a
Group Decreasing Network (GroupDNet). GroupDNet
utilizes group convolutions in the generator and the
group number in the decoder decreases progressively.
Inspired by SPADE, the authors also proposed a novel
normalization layer to make better use of the informa-
tion in the input semantic map. Experiments show that
the GroupDNet architecture is more suitable for the
multi-modal image synthesis (SMIS) task, and can pro-
duce plausible results.
Observing that results from existing methods often
lack detailed local texture, resulting from large objects
dominating the training, Tang et al. [114] aims for bet-
ter synthesis of small objects in the image. In their
design, each class has its own class-level generation net-
work that is trained with feedback from a classification
loss, and all the classes share an image-level global gen-
erator. The class-level generator generates parts of the
image that correspond to each class, from masked fea-
ture maps. All the class-speci c image parts are then
combined and fused with the image-level generation re-
sult. In another work, to provide more fine-grained in-
teractivity, Zhu et al. [153] proposed semantic region-
adaptive normalization (SEAN), which allows manipu-
lation of each semantic region individually, to improve
image quality.
Integration methods. While deep learning based
generative methods are better able to synthesize novel
images, traditional retrieval-and-composition methods
generate images with more reliable texture and fewer ar-
tifacts. To combine the advantages of both parametric
and non-parametric methods, Qi et al. [96] presented a
semi-parametric approach. They built a memory bank
offline, containing segments of different classes of ob-
jects. Given an input semantic map, segments are first
retrieved using a similarity metric defined by IoU score
of the masks. The retrieved segments are fed to a spa-
tial transformer network where they are aligned, and
further put onto a canvas by an ordering network. The
canvas is re ned by a synthesis network to get the nal
result. This combination of retrieval-and-composition
and deep-learning based methods allows high-fidelity
image generation, but it takes more time during infer-
ence and the framework is not end-to-end trainable.
Benchmark Datasets. For synthesis from seman-
tic label maps, experiments are mainly conducted on
datasets of human body [69, 70, 75], human face [59],
indoor scenes [149, 150, 89] and outdoor scenes [18].
Lassner et al. [58] augmented the Chictopia10K [69, 70]
dataset by adding 2D keypoint locations and fitted
SMPL body models, and the augmented dataset is used
by Bem et al. [19]. Park et al. [93] and Zhu et al. [153]
collected images from the Internet and applied state-of-
the-art semantic segmentation models [10, 11] to build
paired datasets.
4.2.3 Poses as Input
Given a reference person image, its corresponding pose,
and a novel pose, pose-based image synthesis meth-
ods can generate an image of the person in that novel
pose. Different from synthesizing images from sketches
or semantic maps, pose-guided synthesis requires novel
views to be generated, which cannot be done by the
retrieval and composition pipeline. Thus we focus on
reviewing deep learning-based methods [2, 79, 80, 107, 95, 19, 22, 65, 111, 154]. In these methods, a pose is of-
ten represented as a set of well-defined body keypoints.
Each of the keypoints can be modeled as an isotropic
Gaussian that is centered at the ground-truth joint lo-
cation and has a small standard deviation, giving rise
to a heatmap. The concatenation of the joint-centered
heatmaps then can be used as the input to the image
synthesis network. Heatmaps of rigid parts and the
whole body can also be utilized [19].
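The heatmap construction just described can be written in a few lines; the toy function below (names and the sigma value are illustrative) turns a list of joint coordinates into the multi-channel pose input that a pose-conditioned generator would consume.

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, sigma=2.0):
    """Render K body keypoints as a (K, H, W) stack of Gaussian heatmaps.

    keypoints is an iterable of (x, y) joint coordinates in pixel space;
    each channel is an isotropic Gaussian centered at the corresponding
    joint, matching the pose representation described above.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmaps = np.zeros((len(keypoints), height, width), dtype=np.float32)
    for k, (x, y) in enumerate(keypoints):
        heatmaps[k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return heatmaps

# Example: two joints on a 64x64 grid; the result can be concatenated
# with the reference image as input to the synthesis network.
pose = keypoints_to_heatmaps([(20, 30), (40, 12)], height=64, width=64)
print(pose.shape)  # (2, 64, 64)
```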
Supervised Deep Learning Methods. In a super-
vised setting, ground truth target images under target
poses are required for training. Thus, datasets with the
same person in multiple poses are needed. Ma et al. [79]
proposed the Pose Guided Person Generation Network
for generating person images under given poses. It
adopts a GAN-like architecture and generates images
in a coarse-to-fine manner. In the coarse stage, an im-
age of a person along with a novel pose are fed into the
U-Net based generator, where the pose is represented as
heatmaps of body keypoints. The coarse output is then
concatenated again with the person image, and a refine-
ment network is trained to learn a difference map that
can be added to the coarse output to get the final re-
fined result. The discriminator is trained to distinguish
synthesized outputs and real images. Besides the GAN
loss, an L1 loss is used to measure dissimilarity between
the generated output and the target image. Since the
target image may have a different background from the
input condition image, the L1 loss is modified to give
higher weight to the human body utilizing a pose mask
derived from the pose skeleton.
Although GANs have achieved great success in im-
age synthesis, there are still some difficulties when it
comes to pose-based synthesis, one of which being the
deformation problem. The given novel pose can be
drastically different from the original pose, resulting in
large deformations in both shape and texture in the
synthesized image and making it hard to directly train
a network that is able to generate images without ar-
tifacts. Existing works mainly adopt transformation
strategies to overcome this problem, because transfor-
mation makes it explicit which body part will
be moved to which place, given the original
and target poses. These methods usually transform
body parts of the original image [2], the human parsing
map [22], or the feature map [107, 22, 154]. Balakrish-
nan et al. [2] explicitly separate the human body from
the background and synthesize person images of unseen
poses and background in separate steps. Their method
consists of four modules: a segmentation module that
produces masks of the whole body and each body part
based on the source image and pose; a transformation
module that calculates and applies affine transforma-
tion to each body part and corresponding feature maps;
a background generation module that applies inpaint-
ing to fill the body-removed foreground region; and a
final integration module that uses the transformed fea-
ture maps and the target pose to get the synthesized
foreground, which is then combined with the inpainted
background to get the nal result. To train the net-
work, they use a VGG-19 perceptual loss along with a
GAN loss. Siarohin et al. [107] noted that it is hard for
the generator to directly capture large body movements
because of the restricted receptive field, and introduced
deformable GANs to tackle the problem. The method
decomposes the body joints into several semantic parts,
and calculates an affine transform from the source to
the target pose for each part. The affine transforms
are used to align the feature maps of the source image
with the target pose. The transformed feature maps are
then concatenated with the target pose features and de-
coded to synthesize the output image. The authors also
proposed a novel nearest-neighbor loss based on feature
maps, instead of using L1 or L2 loss. Their method is
more robust to large pose changes and produces higher
quality images compared to [79]. Dong et al. [22] utilize
parsing results as a proxy to achieve better synthesizing
results. They first estimate parsing results for the target
pose, then fit a Thin Plate Spline (TPS) transformation
between the original and estimated parsing maps. The
TPS transformation is further applied to warp the fea-
ture maps for feature alignment and a soft-gated warp-
ing block is developed to provide controllability to the
transformation degree. The final image is synthesized
based on the transformed feature maps. Zhu et al. [154]
proposed that large deformations can be divided into a
sequence of small deformations, which are more friendly
to network training. In this way, the original pose can
be transformed progressively, through many interme-
diate poses. They proposed a Pose-Attentional Trans-
fer Block (PATB), which transforms the feature maps
under the guidance of an attention mask. By stack-
ing multiple PATBs, the feature maps undergo several
transformations and the transformed maps are used to
synthesize the final result.
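The shared building block of these transformation-based methods is warping source feature maps with per-part transforms. A minimal sketch using PyTorch's affine_grid/grid_sample is given below; it assumes the per-part affine matrices and masks have already been estimated, and it glosses over the method-specific details of [2, 107, 22, 154].

```python
import torch
import torch.nn.functional as F

def warp_parts(features, part_masks, affines):
    """Warp each body part's features with its own affine transform.

    features:   (B, C, H, W) feature maps of the source image.
    part_masks: (B, P, H, W) soft or binary masks, one per body part.
    affines:    (B, P, 2, 3) affine matrices mapping source to target pose,
                expressed in the normalized coordinates used by affine_grid.
    Returns the sum of the warped per-part features, roughly aligned with
    the target pose.
    """
    B, C, H, W = features.shape
    P = part_masks.shape[1]
    warped = torch.zeros_like(features)
    for p in range(P):
        part_feat = features * part_masks[:, p:p + 1]            # isolate one part
        grid = F.affine_grid(affines[:, p], (B, C, H, W), align_corners=False)
        warped = warped + F.grid_sample(part_feat, grid, align_corners=False)
    return warped
```

The warped features are then decoded together with the target pose representation, which is where the individual methods differ.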
While most of the deep learning based methods
for synthesis from poses adopt an adversarial train-
ing paradigm, Bem et al. [19] proposed a conditional-
VAEGAN architecture that combines a conditional-
VAE framework and a GAN discriminator module to
generate realistic natural images of people in a unified
probabilistic framework where the body pose and ap-
pearance are kept as separated and interpretable vari-
ables, allowing the sampling of people with independent
variations of pose and appearance. The loss function
used includes both conditional-VAE and GAN losses
composed of L1 reconstruction loss, closed-form KL-
divergence loss between recognition and prior distribu-
tions, and discriminator cross-entropy loss.
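Schematically, such an objective adds the three terms together, as in the sketch below; the relative weights and the exact discriminator formulation here are assumptions, not the values used in [19].

```python
import torch
import torch.nn.functional as F

def cvae_gan_loss(x_rec, x_real, mu, logvar, d_fake_logits,
                  lambda_kl=0.01, lambda_adv=0.1):
    """Combine reconstruction, KL and adversarial terms for a cVAE-GAN generator.

    x_rec, x_real:  (B, 3, H, W) reconstructed and ground-truth images.
    mu, logvar:     parameters of the recognition distribution q(z | x, pose).
    d_fake_logits:  discriminator logits for the generated images.
    """
    recon = F.l1_loss(x_rec, x_real)
    # Closed-form KL divergence between N(mu, sigma^2) and the standard normal prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Non-saturating generator term from the discriminator's cross-entropy.
    adv = F.binary_cross_entropy_with_logits(d_fake_logits,
                                             torch.ones_like(d_fake_logits))
    return recon + lambda_kl * kl + lambda_adv * adv
```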
Unsupervised Deep Learning Methods. The aforementioned pose-to-image synthesis methods re-
quire ground truth images under target poses for train-
ing because of their use of L1, L2 or perceptual
losses. To eliminate the need for target images, some
works focus on the unsupervised setting of this prob-
lem [95, 111], where the training process does not re-
quire ground truth image of the target pose. The basic
idea is to ensure cycle consistency. After the forward
pass, the synthesized result along with the target pose
will be treated as the reference, and be used to synthe-
size the image under the original reference pose. This
synthesized image should be consistent with the origi-
nal reference image. Pumarola et al. [95] further uti-
lize a pose estimator to ensure pose consistency. Song
et al. [111] use parsing maps as supervision instead of
poses. They predict parsing maps under new target
poses and use them to synthesize the corresponding im-
ages. Since the parsing maps under the target poses are
not available due to operating in the unsupervised set-
ting, the authors proposed a pseudo-label selection tech-
nique to get "fake" parsing maps by searching for the
ones with the same clothes type and minimum trans-
formation energy.
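The core cycle-consistency constraint can be sketched as below, with a hypothetical generator signature G(image, pose); the additional pose-consistency and parsing-based terms of [95, 111] are omitted.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G, src_image, src_pose, tgt_pose):
    """Round-trip a source image through a target pose and back.

    G: generator mapping (image, pose) -> image rendered under that pose.
    With no ground truth for tgt_pose, supervision comes from requiring the
    back-translated image to match the original source image.
    """
    fake_tgt = G(src_image, tgt_pose)      # forward pass to the target pose
    rec_src = G(fake_tgt, src_pose)        # map back to the source pose
    return F.l1_loss(rec_src, src_image)
```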
Benchmark Datasets. For synthesis from poses, the
DeepFashion [75] and Market-1501 [148] datasets are
most widely used. The DeepFashion dataset is built
for clothes recognition but has also been used for pose-
based image synthesis because of the rich annotations
available such as clothing landmarks as well as im-
ages with corresponding foreground but diverse back-
grounds. The Market-1501 dataset was initially intro-
duced for the purpose of person re-identification, and
it contains a large number of person images produced
using a pedestrian detector and annotated bounding
boxes; also, each identity has multiple images from dif-
ferent camera views.
4.3 Other Input Modalities
Besides text descriptions and image-like inputs,
there are other intuitive user inputs such as class la-
bels, attribute vectors, and graph-like inputs.
4.3.1 Visual Attributes as Input
In this subsection, we mainly focus on works that use
one of the fine-grained class conditional labels or vec-
tors, i.e. visual attributes, as inputs. Visual attributes
provide a simple and accurate way of describing ma-
jor features present in images, such as in describing at-
tributes of a certain category of birds or details of a
person's face. Current methods either take a discrete
one-hot vector as attribute labels, or a continuous vec-
tor as visual attribute input.
Yan et al. [135] propose a disentangling CVAE (dis-
CVAE) for conditioned image generation from visual at-
tributes. While the conditional Variational Auto-Encoder
(cVAE) [109] generates images from the posterior con-
ditioned on both the conditions and random vectors,
disCVAE interprets an image as a composite of a
foreground layer and a background layer. The fore-
ground layer is conditioned on visual attributes and the
whole image is generated through a gated integration.
Attribute-conditioned experiments are often conducted
on the LFW [39] and CUB [128] datasets.
For face generation with visual attribute inputs, one
related application is manipulating existing face im-
ages with provided attributes. AttGAN [33] applies
attribute classification constraint and reconstruction
learning to guarantee the change of desired attributes
while maintaining other details. Zhang et al. [140]
propose spatial attention which can localize attribute-
specific regions to perform desired attribute manipula-
tion and keep the rest unchanged. Unlike other works
utilizing attribute inputs, Qian et al. [97] explore face
manipulation via conditional structure input. Given
structure prior as conditional input of the cVAE, AF-
VAE [97] can arbitrarily modify facial expressions and
head poses using geometry-guided feature disentangle-
ment and additive Gaussian Mixture prior for appear-
ance representation. Most such face image manipula-
tion works perform experiments on commonly used face
image datasets such as the CelebA [76] dataset.
For controllable person image synthesis, Men et
al. [83] introduce the Attribute-Decomposed GAN, where
visual attributes including clothes are extracted from
reference images and combined with target poses to
generate target images with desired attributes. The
separation and decomposition of attributes from exist-
ing images provide a new way of synthesizing person
images without attribute annotations.
Another interesting application of taking visual at-
tributes as input is fashion design. Lee et al. [60] pro-
pose a GAN model with an attentional discrimina-
tor for attribute-to-fashion generation. For multiple-
attribute inputs, multiple independent Gaussian distri-
butions are derived by mapping each attribute vector to
the mean vector and diagonal covariance matrix. The
prior distribution for attribute combination is the prod-
uct of all independent Gaussians. Experiments are con-
ducted on a dataset consisting of dress images collected
from a popular fashion site.
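Because a product of independent diagonal Gaussians is again Gaussian, the combined prior has a simple closed form, sketched below; the tensor shapes are assumptions and [60] may parameterize this differently.

```python
import torch

def product_of_gaussians(means, logvars):
    """Combine per-attribute Gaussians N(mu_i, diag(sigma_i^2)) into one Gaussian.

    means, logvars: (A, D) tensors, one row per attribute.
    The product of the densities is proportional to a Gaussian whose precision
    is the sum of the individual precisions and whose mean is precision-weighted.
    """
    precisions = torch.exp(-logvars)              # 1 / sigma_i^2
    combined_var = 1.0 / precisions.sum(dim=0)    # (D,)
    combined_mean = combined_var * (precisions * means).sum(dim=0)
    return combined_mean, combined_var
```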
In terms of image generation methodology using vi-
sual attributes as inputs, the Glow model introduced
in [52] as a generative flow model using an invertible
1×1 convolution shows great potential. Compared
with VAEs and GANs, flow models have merits includ-
ing reversible generation, meaningful latent space, and
memory efficiency. Glow consists of a series of steps of
flow, where each step consists of activation normalization
followed by an invertible 1×1 convolution and then a
coupling layer. On the Cifar10 dataset, Glow achieves
better negative log likelihood than RealNVP [21]. On
the CelebA-HQ dataset, Glow generates high-fidelity face
images and also allows meaningful visual attribute
manipulation.

Figure 5: Example scene graph to image synthesis re-
sults. Scene graphs are often extracted from text de-
scriptions. Correct object relationships embedded in
input scene graphs are reflected in the generated im-
ages. Image taken from [46].
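Returning to Glow, a stripped-down flow step is sketched below. It omits Glow's data-dependent actnorm initialization and multi-scale squeeze/split architecture, uses a tanh-bounded scale in the coupling for simplicity, and assumes an even channel count; it is an illustration, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowStep(nn.Module):
    """One Glow-style step: actnorm -> invertible 1x1 conv -> affine coupling."""

    def __init__(self, channels, hidden=128):
        super().__init__()
        # Actnorm: learnable per-channel scale and bias.
        self.log_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))
        # Invertible 1x1 convolution, initialized to a random orthogonal matrix.
        w, _ = torch.linalg.qr(torch.randn(channels, channels))
        self.weight = nn.Parameter(w)
        # Coupling network predicts shift and log-scale for half the channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Actnorm.
        x = (x + self.bias) * torch.exp(self.log_scale)
        logdet = h * w * self.log_scale.sum()
        # Invertible 1x1 convolution; its log-determinant scales with spatial size.
        x = F.conv2d(x, self.weight.view(c, c, 1, 1))
        logdet = logdet + h * w * torch.slogdet(self.weight)[1]
        # Affine coupling on the second half of the channels.
        xa, xb = x.chunk(2, dim=1)
        shift, log_s = self.net(xa).chunk(2, dim=1)
        log_s = torch.tanh(log_s)              # keep scales bounded (a simplification)
        xb = xb * torch.exp(log_s) + shift
        logdet = logdet + log_s.sum(dim=[1, 2, 3])
        return torch.cat([xa, xb], dim=1), logdet
```

Each operation is invertible and has a tractable log-determinant, which is what enables the exact likelihood training and reversible generation mentioned above.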
Benchmark Datasets. For attributes-guided syn-
thesis tasks, major benchmarking datasets include Vi-
sual Genome, CelebA(-HQ), and Labeled Faces in the
Wild. Visual Genome [56] contains over 100K images
where each image has an average of 21 objects, 18 at-
tributes, and 18 pairwise relationships between objects.
The CelebA [76] dataset has a 40 dimensional binary
attribute vector annotated for each face image. The
CelebA-HQ dataset [49] consists of 30,000 high reso-
lution images from the CelebA dataset. The Labeled
Faces in the Wild (LFW) dataset contains face images
that are segmented and labeled with semantically mean-
ingful region labels (e.g., hair, skin).
4.3.2 Graphs and Layouts as Input
Another interesting type of intuitive user input is
graphs (Fig. 5). Graphs can encode multiple relation-
ships in a concise way and have unique characteris-
tics such as sparse representation. An example applica-
tion of graph-based inputs is architecture design using
scene graphs, layouts, and other similar modalities.
Johnson et al. [46], as mentioned earlier in Section
4.1, can take a scene graph and generate the corre-
sponding layout. The final image is then synthesized
by a CRN model [12] from a noise vector and the lay-
out. Figure 5 demonstrates some results from [46].
To generate images that exhibit complex relation-
ships among multiple objects, Zhao et al. [147] propose
a Layout2Im model that uses layout as input to gener-
ate images. The layout is specified by multiple bound-
ing boxes of objects with category labels. Training of
the model is done by taking groundtruth images with
their layouts, and testing is done by sampling object la-
tent codes from a normal distribution. An object com-
poser takes the word embedding of input text, object
latent code, and bounding box locations to composite
object feature maps. The object feature maps are then
composed using convolutional LSTM into a hidden fea-
ture map and decoded into the final image.
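A toy version of the layout composition step, which pastes per-object feature vectors into their bounding boxes, could look like the following; the shapes are assumptions, and Layout2Im fuses the per-object maps with a convolutional LSTM rather than simply summing them.

```python
import torch

def compose_layout_features(obj_feats, boxes, height, width):
    """Paste per-object feature vectors into their bounding boxes.

    obj_feats: (N, D) per-object features (e.g., label embedding + latent code).
    boxes:     (N, 4) normalized boxes (x0, y0, x1, y1) in [0, 1].
    Returns a (D, height, width) map in which overlapping objects are summed.
    """
    D = obj_feats.shape[1]
    canvas = torch.zeros(D, height, width)
    for feat, (x0, y0, x1, y1) in zip(obj_feats, boxes):
        c0, r0 = int(x0 * width), int(y0 * height)
        c1 = max(int(x1 * width), c0 + 1)
        r1 = max(int(y1 * height), r0 + 1)
        canvas[:, r0:r1, c0:c1] += feat.view(D, 1, 1)
    return canvas
```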
Also containing the idea of converting layout to im-
age, LayoutGAN [61] uses a differentiable wireframe
rendering layer with an image-based discriminator that
can generate layout from graphical element inputs.
Semantic and spatial relations between elements are
learned via a stacked relation module with self atten-
tion, and experiments on various datasets show promis-
ing results in generating meaningful layouts which can
be also rasterized.
Luo et al. [78] propose a variational generative model
which generates 3D scene layouts given input scene
graphs. cVAE is combined with the graph convolution
network (GCN) [53] for layout synthesis. The authors
also present a rendering model which first instantiates
a 3D model by retrieving object meshes, then utilizes a
differentiable renderer to render the corresponding se-
mantic image and the depth image. Their experiments
on the SUNCG dataset [110] show that the method can
generate accurate and diverse 3D scene layouts and has
potential in various downstream scene layout and image
synthesis tasks.
5 Summary and Trends
5.1 Advances in Model Architecture
Design and Training Strategy
Among different attempts at improving the synthesized
image quality and the correspondence between user
input and generated image, several successful designs
are incorporated into multiple conditional generative
models and have proven their effectiveness in various
tasks. For instance, a hierarchical generation archi-
tecture has been widely used by different models, in-
cluding GANs [144, 17, 124] and VAEs [116], in order
to generate high-resolution, high-quality images in a
multi-stage, progressive fashion. Attention-based mech-
anisms are proposed and incorporated in multiple works
[133, 141] towards more fine-grained control over re-
gions within generated images. To ensure correspon-
dence between user input and generated images, vari-
ous designs are proposed for generative neural networks:
Relatively straightforward methods take the combination of user input and other input (e.g., latent vector) as
input to the generative model; other methods take the
user input as part of the supervision signal to measure
the correspondence between input and output; more ad-
vanced methods, which may also be more effective, com-
bine transformed inputs together, such as in projection
discriminator [88] and spatially-adaptive normalization
[93].
While most of the current successful models are based
on GANs, it is well-known that GAN training is difficult
and can be unstable. Similar to general purpose GANs,
works focusing on image synthesis with intuitive user
inputs also adopt different design and training strate-
gies to ease and stabilize the GAN training. Commonly
used normalizations include conditional batch normal-
ization [20] and spectral normalization [87]; commonly
used adversarial losses include WGAN loss with differ-
ent regularizations [1, 31], LS-GAN loss [82] and Hinge
loss [71]. To balance the training of the generator and
the discriminator, imbalanced training strategies such
as two time-scale update rule (TTUR) [34] have also
been adopted for better convergence.
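TTUR itself amounts to nothing more than asymmetric learning rates for the two networks, as in the sketch below; the modules are placeholders, and the 1e-4/4e-4 pairing is a common choice rather than a requirement.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for a real generator and discriminator.
generator = nn.Linear(128, 128)
discriminator = nn.Linear(128, 1)

# Two time-scale update rule: the discriminator gets a larger learning rate
# than the generator so that both can be updated once per iteration, in the
# spirit of [34]. The betas below are also a commonly used GAN setting.
g_optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
```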
General losses employed in di erent models heavily
depend on the methodological framework. Retrieval
and composition methods typically do not need to be
trained, therefore no loss is used. For GAN-like mod-
els, an adversarial loss is essential in a majority of the
models, which combines a loss for the generator and a
loss for the discriminator in order to push the generator
toward generating fake samples that match the distribu-
tion of real examples. Widely used adversarial losses in-
clude the minimax loss introduced in the original GAN
paper [29] and the Wasserstein loss introduced in the
WGAN paper [1]. VAE models are typically trained by
minimizing a reconstruction error between the encoder-
decoded data and the initial data, with some regular-
ization of the latent space [51]. To evaluate the visual
quality of generated images and optimize toward better
image quality, perceptual loss [45] or adversarial feature
matching loss [103] have been adopted by many exist-
ing works, especially when a paired supervision signal is
available.
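For illustration, a hinge-style adversarial pair together with a VGG-based perceptual term can be written as follows; this sketch assumes a recent torchvision, and the chosen feature layers, input normalization, and loss weights vary across works.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def hinge_d_loss(d_real, d_fake):
    """Hinge loss for the discriminator, in the spirit of [71]."""
    return torch.mean(F.relu(1.0 - d_real)) + torch.mean(F.relu(1.0 + d_fake))

def hinge_g_loss(d_fake):
    """Corresponding generator term."""
    return -torch.mean(d_fake)

# Frozen VGG features for a perceptual loss in the spirit of [45]
# (ImageNet input normalization omitted for brevity).
vgg_features = vgg16(weights="DEFAULT").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    """L1 distance between VGG feature maps of generated and target images."""
    return F.l1_loss(vgg_features(fake), vgg_features(real))
```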
Alongside the general losses, auxiliary losses are often
incorporated in models to better handle different tasks.
Task-specific losses, as well as evaluation metrics, are
natural choices to evaluate and improve task-specific
performances. Depending on the input modality, one
commonly used loss or metric is to recover the input
condition from the synthesized images. For instance,
image captioning losses can be included in text-to-image
synthesis models [98], and pose prediction losses can
complement the general losses in pose-to-image synthe-
sis tasks [95, 111].
5.2 Summary on Methods using Spe-
cific Input Types
Recent advances in text-to-image synthesis have been
mainly based on deep learning methods, especially
GANs. Two major challenges of the text-to-image syn-
thesis task are learning the correspondence between
text descriptions and generated images, and ensuring
the quality of generated images. The text-image corre-
spondence problem has been addressed in recent years
with advanced embedding techniques of text descrip-
tions and special designs such as attention mechanisms
used to match words and image regions. For the qual-
ity of generated images, however, promising results are
still limited to generating narrow categories of objects.
For general scenes where multiple objects co-exist with
complex relationships, the realism and diversity of the
generated images are not satisfactory and remain to be
improved. To reduce the difficulty of synthesizing com-
plex scenes, current models may benefit from leverag-
ing different methods such as combining retrieval-and-
composition with deep learning, and relationship learn-
ing which uses relation graphs as auxiliary input or in-
termediate step.
For image-like inputs, one can take a traditional
retrieval-and-composition strategy or adopt the more
recent deep learning based methods. The retrieval-
and-composition strategy has several advantages. First,
its outputs contain fewer artifacts because the objects
are retrieved rather than synthesized. Second, it is
more user-friendly, since it allows user intervention in
all stages of the workflow, which brings controllability
and customizability. Third, it can be directly applied
to a new dataset, without the need for time-consuming
training or adaptation. In comparison, deep learning
based methods are less interpretable and less able
to accept user intervention in all stages of the synthesis
process. Although some attempts at combining the ad-
vantages of the two approaches have been made [96],
deep-learning based methods still dominate for their
versatility and ability to generate completely novel im-
ages. In these deep learning based methods, inputs
are usually represented as regular grid structures like
rasterized images (e.g. for sketches) or multi-channel
tensors (e.g. for poses, semantic maps), for the conve-
nience of utilizing convolution based neural networks.
Methods for di erent input types also have their own
emphases. Works for sketch-based synthesis have at-
tempted to bridge the gap between synthesized sketches
and real free-hand sketches, because the latter is hard
to collect and synthesized sketches can be used to sat-
isfy the needs of training large networks. For synthe-
sis based on semantic maps, progress has been made
mainly on the design of network architectures in or-
der to better utilize information in the input semantic maps. For pose-based synthesis, various solutions
are proposed to address problems caused by large de-
formations between source and target poses, including
performing explicit transformations, learning pixel-level
correspondence, and synthesizing through a sequence
of mild deformations. Efforts have also been made to
alleviate the need for ground-truth data in supervised
learning settings. Take pose-based synthesis for exam-
ple: the supervised setting requires multiple images of
the same person with the same background but different
poses; however, what we often have is an image collec-
tion with only one image for each person. Some meth-
ods [95, 111, 73] are proposed to work under an unsuper-
vised setting, where no ground-truth of the synthesized
result is needed; they mainly work by constraining cy-
cle consistency, with extra supervision for intermediate
outputs.
For image synthesis with visual attributes, applica-
tions in the reviewed works have been mainly on face
synthesis, person synthesis, and fashion design. Since
attributes are an intuitive type of user input suitable
for interactive synthesis, we believe that more appli-
cations should be explored and more advanced mod-
els can be proposed. One bottleneck for current visual
attribute based synthesis tasks is that attribute-level
annotations are often required for supervised training.
For datasets with no attribute-level annotations, unsu-
pervised attribute disentanglement or attribute-related
prior knowledge needs to be incorporated into the model
design to guarantee that the generated images have the
correct attributes.
Image synthesis with graphs as input can better en-
code relationships between objects than using other in-
tuitive user inputs. Current works often rely on graph
neural networks [53, 119] to learn the graph and node
features. In addition to using graphs as input, current
methods also try to generate scene graphs as intermedi-
ate output from other modalities of input such as text
descriptions. Applications of using graphs as intuitive
input include architecture design and scene synthesis
that require the preservation of specific object relation-
ships. While less work has been done on image syn-
thesis with graphs, we believe it has great potential in
advancing techniques capable of generating scenes with
multiple objects, complex relationships, and structural
constraints.
5.3 Summary on Benchmark Datasets
To facilitate the lookup of datasets available for par-
ticular tasks or particular types of input, we summa-
rize popular datasets used for various image synthesis
tasks with intuitive user inputs in Table 1. State-of-
the-art image synthesis methods have achieved high-
quality results using datasets containing single object
Dataset name | # images | Categories | Annotations | Tasks | Used in
Shoe V2 [139] | 8,648 (a) | shoe | P | SK | [73]
Stanford's Cars [55] | 16,185 | car | L, BB | SK | [77]
UT Zappos50K [137, 138] | 50,025 | shoe | L, P | SK | [27]
Caltech-UCSD Birds 200 [128] | 6,033 | bird | L, A, BB, S | TE, SK | [135, 100, 143, 144, 133, 146, 6, 136, 152, 98, 77]
Oxford-102 [90] | 8,189 | flower | L | TE | [100, 143, 144, 133, 146, 152, 98]
Labeled Faces in the Wild [39] | 13,233 | face | L, S | AT | [135, 140]
CelebA [76] | 202,599 | face | L, A, KP | SK, AT | [140, 97, 33, 77, 6]
CelebA-HQ [49] | 30,000 | face | L, A, KP | SK, AT | [52, 94, 68]
Sketchy [105] | 87,971 (b) | objects | L, P | SK | [16]
CUHK Face Sketch [125] | 1,212 (c) | face | P | SK | [30, 106, 131]
COCO [72] | 330,000 | objects | BB, S, KP, T | TE, SK, SE | [81, 143, 144, 133, 146, 37, 63, 113, 136, 152, 98, 36, 120, 3]
COCO-Stuff [8] | 164,000 | objects | S, C | SK, SE, SG, LA | [147, 46, 25, 93, 74]
CelebAMask-HQ [59] | 30,000 | face | S | SE | [153]
Cityscapes [18] | 25,000 | outdoor scene | S | SE | [13, 96, 93, 74, 155, 114, 153]
ADE20K [149, 150] | 22,210 | indoor scene | S | SE | [96, 93, 74, 155, 114, 153]
NYU Depth [89] | 1,449 | indoor scene | S, D | SE | [13, 96]
Chictopia10K [69, 70] | 17,706 | human | S | SE | [58]
DeepFashion [75] | 52,712 | human | L, A, P, KP | SE, P, AT | [155, 79, 80, 107, 95, 22, 65, 111, 154, 83]
Market-1501 [148] | 32,668 | human | L, A | P | [79, 80, 107, 22, 65, 111, 154]
Human3.6M [42] | 3,600,000 | human | KP, BB, S, SC | P | [19]
Visual Genome [56] | 108,077 | objects | BB, A, R, T, VQA | SG, LA | [147, 46]
(a) 2,000 real images and 6,648 sketches.
(b) 12,500 real images and 75,471 sketches.
(c) 606 pairs of real face photos and the corresponding sketches.
Table 1: Commonly used datasets in image synthesis tasks with intuitive user inputs. For annotations, possible
values are Label, Attribute, Pair, KeyPoint, Bounding Box, Semantic map, Relationship, Text, Visual Question
Answers, Depth map, 3D SCan. For tasks, possible values are TExt, Pose, SKetch, SEmantic map, ATtributes,
SceneGraph, LAyout.
categories such as cars [55], birds [128], and human
faces [76, 49, 125, 59]. For synthesizing images that
contain multiple object categories and complex scene
structures, there is still room for improvement using
datasets such as the MS-COCO [72]. Future work can
also focus more on synthesis with intuitive and interac-
tive user inputs, as well as applications of the synthesis
methods in real-world scenarios.
6 New Perspectives
Having reviewed recent works for image synthesis given
intuitive inputs, we discuss in this section new perspec-
tives on future research that relate to input versatility,
generation methodology, benchmark datasets and eval-
uation metrics.
6.1 Input Versatility
Text to Image. While current methods for text-to-
image synthesis mainly take text inputs that describe
the visual content of an image, more natural inputs of-
ten contain affective words such as happy or pleasing,
scary or frightful. To handle such inputs, it is necessary
for models to consider the emotional effects as part of
the input text comprehension. Further, generating im-
ages that express or incur a certain sentiment will re-
quire learning the mapping between visual content and
emotional dimensions such as valence (i.e. positive or negative affectivity) and arousal (i.e. how calming or
exciting the information is), as well as understanding
how different compositions of the same objects in an
image can lead to different sentiments.
For particular application domains, input text de-
scriptions may be more versatile. For instance, in med-
ical image synthesis, a given input can be a clinical re-
port that contains one or several paragraphs of text de-
scription. Such domain-specific inputs also require prior
knowledge for input text comprehension and text-to-
image mapping. Other under-explored applications in-
clude taking paragraphs or multiple sentences as input
to generate a sequence of images for story telling [66],
or text-based video synthesis and editing [92, 67, 122].
For conditional synthesis, most current works per-
form one-to-many generation and try to improve the
diversity of images generated given the same text in-
put. One interesting work for text-to-image synthesis
by Yin et al. proposes SD-GAN [136] which investigates
the variability among di erent inputs intended for the
same target image. New applications may be discovered
that need methods for many-to-one synthesis using sim-
ilar pipelines.
Image from sketch, pose, graphic inputs, and
others. For sketches and poses as user inputs, exist-
ing methods treat them as rasterized images to perform
an image-to-image translation as the synthesis method.
Considering that sketches and poses all contain geom-
etry information and the relationships among di erent
points on the geometry are important, we believe it
is beneficial to investigate representing such inputs as
sparse vectorized representations such as graphs, in-
stead of using rasterized representations. Taking vec-
torized inputs will greatly reduce the input sizes and
will also enable the use of existing graph understanding
techniques such as graph neural networks. For sketches
as input, another interesting task is to generate videos
from sketch-based storyboards, since it has numerous
applications in animation and visualization.
For graphic inputs that represent architectural struc-
tures such as layouts and wireframes, an important con-
sideration is that the synthesized images should pre-
serve structural constraints such as junctions, paral-
lel lines, and planar surfaces [134] or relations between
graphical elements [61]. In these scenarios, incorporat-
ing prior knowledge about the physical world can help
enhance the photorealism of generated images and im-
prove the structural coherence of generated designs.
It will also be interesting to further investigate im-
age and/or video generation from other forms of in-
puts. Audio, for instance, is another intuitive, interac-
tive and expressive type of input. Generating photo-
realistic video portraits that are in synch with input
audio streams [9, 151, 129] has many applications such
as assisting the hearing impaired with speech compre-
hension, privacy-preserving video chat, and VR/AR for
training professionals.
6.2 Connections and Integration be-
tween Generation Paradigms
In conditional image synthesis, deep learning based
methods have been dominating and have shown promis-
ing results. However, they still have limitations includ-
ing the requirement of large training datasets and high
computational cost for training. Since the retrieval-
and-composition methods are often light-weight and re-
quire little training, they can be complementary to the
deep learning based methods. Existing works on im-
age synthesis from semantic maps have explored the
strategy of combining retrieval-and-composition and
learning-based models [96]. One way of combination
could be using retrieval-and-composition to generate a
draft image and then refining the image for better visual
quality and diversity using a learning-based approach.
Besides the quality of generated images, the control-
lability of the output and the interpretability of the
model also play essential roles in the synthesis pro-
cess. Although GAN models generally achieve better
image quality than other methods, it is often more dif-
ficult to perform interactive or controllable generation
using GAN methods than other learning based meth-
ods. Hybrid models such as the combination of GANs
and VAEs [57, 85, 4, 19] have shown promising synthesis results as well as better feature disentanglement prop-
erties. Future works in image synthesis given intuitive
user input can explore more possibilities of using hybrid
models combining the advantages of GANs and VAEs
such as in [19] as well as using normalizing flow based
methods [102, 52] that allow both feature learning and
tractable marginal likelihood estimation.
Overall, we believe cross pollination between major
image generation paradigms will continue to be an im-
portant direction, which can produce new models that
improve upon existing image synthesis paradigms by
combining their merits and overcoming their limita-
tions.
6.3 Evaluation and comparison of gen-
eration methods
Evaluation Metrics. While a range of quantitative
metrics for measuring the realism and diversity of gen-
erated images have been proposed including widely used
IS [103], FID [34], and SSIM [126], they are still lack-
ing in consistency with human perception and that is
why many works still rely on qualitative human eval-
uation to assess the quality of images synthesized by
different methods. Recently, some metrics, such as R-
precision [133] and SOA score [36] in text-to-image syn-
thesis, have been proposed to evaluate whether a gen-
erated image is well conditioned on the given input and
try to achieve better consistency with human percep-
tion. Further work on automatic metrics that match
well with human evaluation will continue to be impor-
tant.
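As a reference point, FID [34] reduces to a Frechet distance between two Gaussians fitted to Inception activations of real and generated images. The sketch below operates on precomputed statistics; extracting the activations requires an Inception-v3 feature extractor, which is omitted here.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2), as used by FID.

    mu1, mu2:   (D,) mean activation vectors.
    cov1, cov2: (D, D) covariance matrices of the activations.
    """
    diff = mu1 - mu2
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):   # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)
```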
For a specific task or application, evaluation should
be based on not just the final image quality but how well
the generated images match the conditional input and
serve the purpose of the intended application or task. If
the synthesized images are used in down-stream tasks
such as data augmentation for classification, evaluation
based on down-stream tasks also provides valuable in-
formation.
While it is difficult to compare methods across input
types due to differences in input modality and interac-
tivity, it is feasible to establish standard processes for
synthesis from a particular kind of input, thus making
it possible for fair comparison between methods given
the same type of input using the same benchmark.
Datasets. As shown in Sec. 5.3, large-scale datasets
of natural images and annotations have been collected
for specific object categories such as human bodies,
faces, birds, cars, and for scenes that contain multi-
ple object categories such as those in COCO [72] and
CityScapes [18]. As future work, in order to enable ap-
plications in particular domains that benefit from image
synthesis such as medical image synthesis for data aug-
mentation and movie video generation, domain-specific
datasets with appropriate annotations will need to be
created.
Evaluation of input choices. Existing image gen-
eration methods have been evaluated and compared
mainly based on their output, i.e. the generated images.
We believe that in image generation tasks conditioned
on intuitive inputs, it is equally important to compare
methods based on their input choice. In Sec. 2.1, we
introduced several characteristics that can be used to
compare and evaluate inputs such as their accessibil-
ity, expressiveness, and interactivity. It will be inter-
esting to study other important characteristics of in-
puts as well as criteria for evaluating how well an input
type meets the needs of an application, how well the
input supports interactive editing, how regularized the
learned latent space is, and how well the synthesized
image matches the input condition.
7 Conclusions
This review has covered main approaches for image syn-
thesis and rendering given intuitive user inputs. First,
we examine what makes a good paradigm for image
synthesis from intuitive user input, from the perspec-
tive of user input characteristics and that of output im-
age quality. We then provide an overview of main gen-
eration paradigms: retrieval and composition, cGAN,
cVAE and hybrid models, autoregressive models, and nor-
malizing flow based methods. Their relative strengths
and weaknesses are discussed in hope of inspiring ideas
that draw connections between the main approaches to
produce models and methods that take advantage of the
relative strengths of each paradigm. After the overview,
we delve into details of specific algorithms for different
input types and examine their ideas and contributions.
In particular, we conduct a comprehensive literature
review on approaches for generating images from text,
sketches or strokes, semantic label maps, poses, visual
attributes, graphs and layouts. Then, we summarize
these existing methods in terms of benchmark datasets
used and identify trends related to advances in model
architecture design and training strategy, and strategies
for handling specific input types. Last but not least, we
provide our perspective on future directions related to
input versatility, generation methodology, benchmark
datasets, and method evaluation and comparison.
References
[1] Martin Arjovsky, Soumith Chintala, and Léon
Bottou. Wasserstein generative adversarial net-
works. In Proceedings of the 34th Interna-
tional Conference on Machine Learning-Volume
70, pages 214-223, 2017.
[2] Guha Balakrishnan, Amy Zhao, Adrian V Dalca,
Fredo Durand, and John Guttag. Synthesizing
images of humans in unseen poses. In Proceedings
of the IEEE Conference on Computer Vision and
Pattern Recognition , pages 8340{8348, 2018.
[3] Aayush Bansal, Yaser Sheikh, and Deva Ra-
manan. Shapes and context: In-the-wild image
synthesis & manipulation. In Proceedings of the
IEEE Conference on Computer Vision and Pat-
tern Recognition , pages 2317{2326, 2019.
[4] Jianmin Bao, Dong Chen, Fang Wen, Houqiang
Li, and Gang Hua. Cvae-gan: fine-grained image
generation through asymmetric training. In Pro-
ceedings of the IEEE international conference on
computer vision , pages 2745{2754, 2017.
[5] Serge Belongie, Jitendra Malik, and Jan Puzicha.
Shape context: A new descriptor for shape match-
ing and object recognition. In Advances in neural
information processing systems , pages 831{837,
2001.
[6] Navaneeth Bodla, Gang Hua, and Rama Chel-
lappa. Semi-supervised fusedgan for conditional
image generation. In Proceedings of the European
Conference on Computer Vision (ECCV) , pages
669{683, 2018.
[7] Holger Caesar, Jasper Uijlings, and Vittorio Fer-
rari. Coco-stuff: Thing and stuff classes in con-
text. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition , pages
1209{1218, 2018.
[8] Holger Caesar, Jasper Uijlings, and Vittorio Fer-
rari. Coco-stuff: Thing and stuff classes in con-
text. In Computer vision and pattern recognition
(CVPR), 2018 IEEE conference on . IEEE, 2018.
[9] Lele Chen, Ross K Maddox, Zhiyao Duan, and
Chenliang Xu. Hierarchical cross-modal talk-
ing face generation with dynamic pixel-wise loss.
InProceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition , pages
7832{7841, 2019.
[10] Liang-Chieh Chen, George Papandreou, Iasonas
Kokkinos, Kevin Murphy, and Alan L Yuille.
Deeplab: Semantic image segmentation with deep
convolutional nets, atrous convolution, and fully
connected crfs. IEEE transactions on pattern
analysis and machine intelligence , 40(4):834{848,
2017.
[11] Liang-Chieh Chen, Yukun Zhu, George Pa-
pandreou, Florian Schroff, and Hartwig Adam.
Encoder-decoder with atrous separable convolu-
tion for semantic image segmentation. In Pro-
ceedings of the European conference on computer
vision (ECCV) , pages 801{818, 2018.
[12] Qifeng Chen and Vladlen Koltun. Photographic
image synthesis with cascaded refinement net-
works. In Proceedings of the IEEE international
conference on computer vision , pages 1511{1520,
2017.
[13] Qifeng Chen and Vladlen Koltun. Photographic
image synthesis with cascaded refinement net-
works. In Proceedings of the IEEE international
conference on computer vision , pages 1511{1520,
2017.
[14] Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong
Xia, and Hongbo Fu. Deepfacedrawing: Deep
generation of face images from sketches. ACM
Transactions on Graphics (TOG) , 39(4):72{1,
2020.
[15] Tao Chen, Ming-Ming Cheng, Ping Tan, Ariel
Shamir, and Shi-Min Hu. Sketch2photo: Internet
image montage. ACM transactions on graphics
(TOG) , 28(5):1{10, 2009.
[16] Wengling Chen and James Hays. Sketchygan: To-
wards diverse and realistic sketch to image syn-
thesis. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition , pages
9416{9425, 2018.
[17] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-
Woo Ha, Sunghun Kim, and Jaegul Choo. Star-
gan: Unified generative adversarial networks for
multi-domain image-to-image translation. In Pro-
ceedings of the IEEE conference on computer vi-
sion and pattern recognition , pages 8789{8797,
2018.
[18] Marius Cordts, Mohamed Omran, Sebastian
Ramos, Timo Rehfeld, Markus Enzweiler, Ro-
drigo Benenson, Uwe Franke, Stefan Roth, and
Bernt Schiele. The cityscapes dataset for seman-
tic urban scene understanding. In Proceedings of
the IEEE conference on computer vision and pat-
tern recognition , pages 3213{3223, 2016.
[19] Rodrigo De Bem, Arnab Ghosh, Adnane
Boukhayma, Thalaiyasingam Ajanthan, N Sid-
dharth, and Philip Torr. A conditional deep gen-
erative model of people in natural images. In
2019 IEEE Winter Conference on Applications
of Computer Vision (WACV) , pages 1449{1458.
IEEE, 2019.
[20] Harm De Vries, Florian Strub, Jérémie Mary,
Hugo Larochelle, Olivier Pietquin, and Aaron C
Courville. Modulating early visual processing by
language. In Advances in Neural Information
Processing Systems , pages 6594{6604, 2017.
[21] Laurent Dinh, Jascha Sohl-Dickstein, and Samy
Bengio. Density estimation using real nvp. arXiv
preprint arXiv:1605.08803 , 2016.
[22] Haoye Dong, Xiaodan Liang, Ke Gong, Hanjiang
Lai, Jia Zhu, and Jian Yin. Soft-gated warping-
gan for pose-guided person image synthesis. In
Advances in neural information processing sys-
tems, pages 474{484, 2018.
[23] Mathias Eitz, James Hays, and Marc Alexa. How
do humans sketch objects? ACM Transactions
on graphics (TOG) , 31(4):1{10, 2012.
[24] Mathias Eitz, Ronald Richter, Kristian Hilde-
brand, Tamy Boubekeur, and Marc Alexa. Photo-
sketcher: interactive sketch-based image synthe-
sis. IEEE Computer Graphics and Applications ,
31(6):56{66, 2011.
[25] Chengying Gao, Qi Liu, Qi Xu, Limin Wang,
Jianzhuang Liu, and Changqing Zou. Sketchy-
coco: Image generation from freehand scene
sketches. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recog-
nition , pages 5174{5183, 2020.
[26] Eduardo SL Gastal and Manuel M Oliveira. Do-
main transform for edge-aware image and video
processing. ACM Transactions on Graphics
(TOG) , 30(4):1{12, 2011.
[27] Arnab Ghosh, Richard Zhang, Puneet K Dokania,
Oliver Wang, Alexei A Efros, Philip HS Torr, and
Eli Shechtman. Interactive sketch & fill: Multi-
class sketch-to-image translation. In Proceedings
of the IEEE international conference on computer
vision , pages 1171{1180, 2019.
[28] Ross Girshick. Fast r-cnn. In Proceedings of the
IEEE international conference on computer vi-
sion, pages 1440{1448, 2015.
[29] Ian Goodfellow, Jean Pouget-Abadie, Mehdi
Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Gen-
erative adversarial nets. In Advances in neural
information processing systems , pages 2672{2680,
2014.
[30] Yağmur Güçlütürk, Umut Güçlü, Rob van Lier,
and Marcel AJ van Gerven. Convolutional sketch
inversion. In European conference on computer
vision , pages 810{824. Springer, 2016.
[31] Ishaan Gulrajani, Faruk Ahmed, Martin Ar-
jovsky, Vincent Dumoulin, and Aaron C
Courville. Improved training of wasserstein gans.
InAdvances in neural information processing sys-
tems, pages 5767{5777, 2017.
[32] Gustave V Hahn-Powell and Diana Archangeli.
Autotrace: An automatic system for tracing
tongue contours. The Journal of the Acoustical
Society of America , 136(4):2104{2104, 2014.
[33] Zhenliang He, Wangmeng Zuo, Meina Kan,
Shiguang Shan, and Xilin Chen. Attgan: Fa-
cial attribute editing by only changing what you
want. IEEE Transactions on Image Processing ,
28(11):5464{5478, 2019.
[34] Martin Heusel, Hubert Ramsauer, Thomas Un-
terthiner, Bernhard Nessler, and Sepp Hochreiter.
Gans trained by a two time-scale update rule con-
verge to a local nash equilibrium. In Advances
in neural information processing systems , pages
6626{6637, 2017.
[35] Tobias Hinz, Stefan Heinrich, and Stefan
Wermter. Generating multiple objects at
spatially distinct locations. arXiv preprint
arXiv:1901.00686 , 2019.
[36] Tobias Hinz, Stefan Heinrich, and Stefan
Wermter. Semantic object accuracy for gen-
erative text-to-image synthesis. arXiv preprint
arXiv:1910.13321 , 2019.
[37] Seunghoon Hong, Dingdong Yang, Jongwook
Choi, and Honglak Lee. Inferring semantic lay-
out for hierarchical text-to-image synthesis. In
Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition , pages 7986{7994,
2018.
[38] Shi-Min Hu, Fang-Lue Zhang, Miao Wang,
Ralph R Martin, and Jue Wang. Patchnet: A
patch-based image representation for interactive
library-driven image editing. ACM Transactions
on Graphics (TOG) , 32(6):1{12, 2013.
[39] Gary B. Huang, Manu Ramesh, Tamara Berg,
and Erik Learned-Miller. Labeled faces in the
wild: A database for studying face recognition
in unconstrained environments. Technical Report
07-49, University of Massachusetts, Amherst, Oc-
tober 2007.
[40] Xun Huang and Serge Belongie. Arbitrary style
transfer in real-time with adaptive instance nor-
malization. In Proceedings of the IEEE Inter-
national Conference on Computer Vision , pages
1501-1510, 2017.
[41] Xun Huang, Ming-Yu Liu, Serge Belongie, and
Jan Kautz. Multimodal unsupervised image-to-
image translation. In Proceedings of the European
Conference on Computer Vision (ECCV) , pages
172{189, 2018.
[42] Catalin Ionescu, Dragos Papava, Vlad Olaru, and
Cristian Sminchisescu. Human3. 6m: Large scale
datasets and predictive methods for 3d human
sensing in natural environments. IEEE trans-
actions on pattern analysis and machine intelli-
gence , 36(7):1325{1339, 2013.
[43] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and
Alexei A Efros. Image-to-image translation with
conditional adversarial networks. In Proceedings
of the IEEE conference on computer vision and
pattern recognition , pages 1125{1134, 2017.
[44] Oleg Ivanov, Michael Figurnov, and Dmitry
Vetrov. Variational autoencoder with arbitrary
conditioning. In International Conference on
Learning Representations , 2018.
[45] Justin Johnson, Alexandre Alahi, and Li Fei-Fei.
Perceptual losses for real-time style transfer and
super-resolution. In European conference on com-
puter vision , pages 694{711. Springer, 2016.
[46] Justin Johnson, Agrim Gupta, and Li Fei-Fei. Im-
age generation from scene graphs. In Proceedings
of the IEEE conference on computer vision and
pattern recognition , pages 1219{1228, 2018.
[47] Matthew Johnson, Gabriel J Brostow, Jamie
Shotton, Ognjen Arandjelovic, Vivek Kwatra,
and Roberto Cipolla. Semantic photo synthesis.
InComputer Graphics Forum , volume 25, pages
407{413. Wiley Online Library, 2006.
[48] Henry Kang, Seungyong Lee, and Charles K Chui.
Coherent line drawing. In Proceedings of the 5th
international symposium on Non-photorealistic
animation and rendering , pages 43{50, 2007.
[49] Tero Karras, Timo Aila, Samuli Laine, and
Jaakko Lehtinen. Progressive growing of gans for
improved quality, stability, and variation. arXiv
preprint arXiv:1710.10196 , 2017.
[50] Tero Karras, Samuli Laine, and Timo Aila. A
style-based generator architecture for generative
adversarial networks. In Proceedings of the IEEE
conference on computer vision and pattern recog-
nition , pages 4401{4410, 2019.
[51] Diederik P Kingma and Max Welling. Auto-
encoding variational bayes. arXiv preprint
arXiv:1312.6114 , 2013.
[52] Durk P Kingma and Prafulla Dhariwal. Glow:
Generative flow with invertible 1x1 convolutions.
InAdvances in neural information processing sys-
tems, pages 10215{10224, 2018.
[53] Thomas N Kipf and Max Welling. Semi-
supervised classification with graph convolutional
networks. arXiv preprint arXiv:1609.02907 , 2016.
[54] Jack Klys, Jake Snell, and Richard Zemel. Learn-
ing latent subspaces in variational autoencoders.
InAdvances in Neural Information Processing
Systems , pages 6444{6454, 2018.
[55] Jonathan Krause, Michael Stark, Jia Deng, and
Li Fei-Fei. 3d object representations for fine-
grained categorization. In 4th International IEEE
Workshop on 3D Representation and Recognition
(3dRR-13) , Sydney, Australia, 2013.
[56] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin
Johnson, Kenji Hata, Joshua Kravitz, Stephanie
Chen, Yannis Kalantidis, Li-Jia Li, David A
Shamma, et al. Visual genome: Connecting lan-
guage and vision using crowdsourced dense image
annotations. International journal of computer
vision , 123(1):32{73, 2017.
[57] Anders Boesen Lindbo Larsen, Søren Kaae
Sønderby, Hugo Larochelle, and Ole Winther. Au-
toencoding beyond pixels using a learned similar-
ity metric. In International conference on ma-
chine learning , pages 1558{1566. PMLR, 2016.
[58] Christoph Lassner, Gerard Pons-Moll, and Pe-
ter V Gehler. A generative model of people
in clothing. In Proceedings of the IEEE Inter-
national Conference on Computer Vision , pages
853{862, 2017.
[59] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping
Luo. Maskgan: Towards diverse and interac-
tive facial image manipulation. In IEEE Con-
ference on Computer Vision and Pattern Recog-
nition (CVPR) , 2020.
[60] Hanbit Lee and Sang-goo Lee. Fashion attributes-
to-image synthesis using attention-based genera-
tive adversarial network. In 2019 IEEE Winter
Conference on Applications of Computer Vision
(WACV) , pages 462{470. IEEE, 2019.
[61] Jianan Li, Jimei Yang, Aaron Hertzmann, Jian-
ming Zhang, and Tingfa Xu. Layoutgan: Gener-
ating graphic layouts with wireframe discrimina-
tors. arXiv preprint arXiv:1901.06767 , 2019.
[62] Mengtian Li, Zhe Lin, Radomir Mech, Ersin
Yumer, and Deva Ramanan. Photo-sketching: Inferring contour drawings from images. In
2019 IEEE Winter Conference on Applications
of Computer Vision (WACV) , pages 1403{1412.
IEEE, 2019.
[63] Wenbo Li, Pengchuan Zhang, Lei Zhang, Qi-
uyuan Huang, Xiaodong He, Siwei Lyu, and Jian-
feng Gao. Object-driven text-to-image synthesis
via adversarial training. In Proceedings of the
IEEE Conference on Computer Vision and Pat-
tern Recognition , pages 12174{12182, 2019.
[64] Yijun Li, Chen Fang, Aaron Hertzmann, Eli
Shechtman, and Ming-Hsuan Yang. Im2pencil:
Controllable pencil illustration from photographs.
InProceedings of the IEEE Conference on Com-
puter Vision and Pattern Recognition , pages
1525{1534, 2019.
[65] Yining Li, Chen Huang, and Chen Change Loy.
Dense intrinsic appearance flow for human pose
transfer. In Proceedings of the IEEE Confer-
ence on Computer Vision and Pattern Recogni-
tion, pages 3693{3702, 2019.
[66] Yitong Li, Zhe Gan, Yelong Shen, Jingjing Liu,
Yu Cheng, Yuexin Wu, Lawrence Carin, David
Carlson, and Jianfeng Gao. Storygan: A sequen-
tial conditional gan for story visualization. In
Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition , pages 6329{6338,
2019.
[67] Yitong Li, Martin Renqiang Min, Dinghan Shen,
David E Carlson, and Lawrence Carin. Video gen-
eration from text. In Proceedings of the AAAI
Conf. on Artificial Intelligence, 2018.
[68] Yuhang Li, Xuejin Chen, Feng Wu, and Zheng-
Jun Zha. Linestofacephoto: Face photo genera-
tion from lines with conditional self-attention gen-
erative adversarial networks. In Proceedings of the
27th ACM International Conference on Multime-
dia, pages 2323{2331, 2019.
[69] Xiaodan Liang, Si Liu, Xiaohui Shen, Jian-
chao Yang, Luoqi Liu, Jian Dong, Liang Lin,
and Shuicheng Yan. Deep human parsing with
active template regression. IEEE transactions
on pattern analysis and machine intelligence ,
37(12):2402{2414, 2015.
[70] Xiaodan Liang, Chunyan Xu, Xiaohui Shen, Jian-
chao Yang, Si Liu, Jinhui Tang, Liang Lin, and
Shuicheng Yan. Human parsing with contextual-
ized convolutional neural network. In Proceedings
of the IEEE international conference on computer
vision , pages 1386{1394, 2015.
[71] Jae Hyun Lim and Jong Chul Ye. Geometric gan.
arXiv preprint arXiv:1705.02894 , 2017.
[72] Tsung-Yi Lin, Michael Maire, Serge Belongie,
James Hays, Pietro Perona, Deva Ramanan, Piotr
Dollár, and C Lawrence Zitnick. Microsoft coco:
Common objects in context. In European confer-
ence on computer vision , pages 740{755. Springer,
2014.
[73] Runtao Liu, Qian Yu, and Stella Yu. Unsuper-
vised sketch-to-photo synthesis. In Proceedings of
the European Conf. on Computer Vision , 2020.
[74] Xihui Liu, Guojun Yin, Jing Shao, Xiaogang
Wang, et al. Learning to predict layout-to-image
conditional convolutions for semantic image syn-
thesis. In Advances in Neural Information Pro-
cessing Systems , pages 570{580, 2019.
[75] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang,
and Xiaoou Tang. Deepfashion: Powering ro-
bust clothes recognition and retrieval with rich
annotations. In Proceedings of IEEE Confer-
ence on Computer Vision and Pattern Recogni-
tion (CVPR) , June 2016.
[76] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xi-
aoou Tang. Deep learning face attributes in the
wild. In Proceedings of International Conference
on Computer Vision (ICCV) , December 2015.
[77] Yongyi Lu, Shangzhe Wu, Yu-Wing Tai, and
Chi-Keung Tang. Image generation from sketch
constraint using contextual gan. In Proceedings
of the European Conference on Computer Vision
(ECCV) , pages 205{220, 2018.
[78] Andrew Luo, Zhoutong Zhang, Jiajun Wu, and
Joshua B. Tenenbaum. End-to-end optimiza-
tion of scene layout. In Proceedings of the
IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) , June 2020.
[79] Liqian Ma, Xu Jia, Qianru Sun, Bernt Schiele,
Tinne Tuytelaars, and Luc Van Gool. Pose guided
person image generation. In Advances in neural
information processing systems , pages 406{416,
2017.
[80] Liqian Ma, Qianru Sun, Stamatios Georgoulis,
Luc Van Gool, Bernt Schiele, and Mario Fritz.
Disentangled person image generation. In Pro-
ceedings of the IEEE Conference on Computer Vi-
sion and Pattern Recognition , pages 99{108, 2018.
[81] Elman Mansimov, Emilio Parisotto, Jimmy Lei
Ba, and Ruslan Salakhutdinov. Generating im-
ages from captions with attention. arXiv preprint
arXiv:1511.02793, 2015.
[82] Xudong Mao, Qing Li, Haoran Xie, Raymond YK
Lau, Zhen Wang, and Stephen Paul Smolley.
Least squares generative adversarial networks. In
Proceedings of the IEEE international conference
on computer vision , pages 2794{2802, 2017.
[83] Yifang Men, Yiming Mao, Yuning Jiang, Wei-
Ying Ma, and Zhouhui Lian. Controllable person
image synthesis with attribute-decomposed gan.
InProceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition , pages
5084{5093, 2020.
[84] Lars Mescheder, Andreas Geiger, and Sebas-
tian Nowozin. Which training methods for
gans do actually converge? arXiv preprint
arXiv:1801.04406 , 2018.
[85] Lars Mescheder, S Nowozin, and Andreas Geiger.
Adversarial variational bayes: Unifying vari-
ational autoencoders and generative adversar-
ial networks. In 34th International Conference
on Machine Learning (ICML) , pages 2391{2400.
PMLR, 2017.
[86] Mehdi Mirza and Simon Osindero. Condi-
tional generative adversarial nets. arXiv preprint
arXiv:1411.1784 , 2014.
[87] Takeru Miyato, Toshiki Kataoka, Masanori
Koyama, and Yuichi Yoshida. Spectral normal-
ization for generative adversarial networks. In
International Conference on Learning Represen-
tations , 2018.
[88] Takeru Miyato and Masanori Koyama. cgans with
projection discriminator. In International Confer-
ence on Learning Representations , 2018.
[89] Pushmeet Kohli Nathan Silberman, Derek Hoiem
and Rob Fergus. Indoor segmentation and sup-
port inference from rgbd images. In Proceedings
of the European Conf. on Computer Vision , 2012.
[90] Maria-Elena Nilsback and Andrew Zisserman.
Automated flower classification over a large num-
ber of classes. In 2008 Sixth Indian Conference on
Computer Vision, Graphics & Image Processing ,
pages 722{729. IEEE, 2008.
[91] Augustus Odena, Christopher Olah, and
Jonathon Shlens. Conditional image synthesis
with auxiliary classifier gans. In Interna-
tional conference on machine learning , pages
2642{2651, 2017.
[92] Yingwei Pan, Zhaofan Qiu, Ting Yao, Houqiang
Li, and Tao Mei. To create what you tell: Gener-
ating videos from captions. In Proceedings of the
25th ACM international conference on Multime-
dia, pages 1789{1798, 2017.
[93] Taesung Park, Ming-Yu Liu, Ting-Chun Wang,
and Jun-Yan Zhu. Semantic image synthesis with
spatially-adaptive normalization. In Proceedings
of the IEEE Conference on Computer Vision and
Pattern Recognition , pages 2337{2346, 2019.
[94] Tiziano Portenier, Qiyang Hu, Attila Szabo,
Siavash Arjomand Bigdeli, Paolo Favaro, and
Matthias Zwicker. Faceshop: Deep sketch-
based face image editing. arXiv preprint
arXiv:1804.08972 , 2018.
[95] Albert Pumarola, Antonio Agudo, Alberto Sanfe-
liu, and Francesc Moreno-Noguer. Unsupervised
person image synthesis in arbitrary poses. In Pro-
ceedings of the IEEE Conference on Computer Vi-
sion and Pattern Recognition , pages 8620{8628,
2018.
[96] Xiaojuan Qi, Qifeng Chen, Jiaya Jia, and Vladlen
Koltun. Semi-parametric image synthesis. In Pro-
ceedings of the IEEE Conference on Computer Vi-
sion and Pattern Recognition , pages 8808{8816,
2018.
[97] Shengju Qian, Kwan-Yee Lin, Wayne Wu, Yangx-
iaokang Liu, Quan Wang, Fumin Shen, Chen
Qian, and Ran He. Make a face: Towards ar-
bitrary high fidelity face manipulation. In Pro-
ceedings of the IEEE International Conference on
Computer Vision , pages 10033{10042, 2019.
[98] Tingting Qiao, Jing Zhang, Duanqing Xu, and
Dacheng Tao. Mirrorgan: Learning text-to-image
generation by redescription. In Proceedings of the
IEEE Conference on Computer Vision and Pat-
tern Recognition , pages 1505{1514, 2019.
[99] Alec Radford, Luke Metz, and Soumith Chin-
tala. Unsupervised representation learning with
deep convolutional generative adversarial net-
works. arXiv preprint arXiv:1511.06434 , 2015.
[100] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanu-
gen Logeswaran, Bernt Schiele, and Honglak Lee.
Generative adversarial text to image synthesis. In
International Conference on Machine Learning ,
pages 1060{1069, 2016.
[101] Scott E Reed, Zeynep Akata, Santosh Mohan,
Samuel Tenka, Bernt Schiele, and Honglak Lee.
Learning what and where to draw. In Advances
in neural information processing systems , pages
217-225, 2016.
[102] Danilo Rezende and Shakir Mohamed. Varia-
tional inference with normalizing flows. In Inter-
national Conference on Machine Learning , pages
1530{1538, 2015.
[103] Tim Salimans, Ian Goodfellow, Wojciech
Zaremba, Vicki Cheung, Alec Radford, and
Xi Chen. Improved techniques for training gans.
InAdvances in neural information processing
systems , pages 2234{2242, 2016.
[104] Tim Salimans, Andrej Karpathy, Xi Chen, and
Diederik P Kingma. Pixelcnn++: Improving the
pixelcnn with discretized logistic mixture likeli-
hood and other modifications. arXiv preprint
arXiv:1701.05517 , 2017.
[105] Patsorn Sangkloy, Nathan Burnell, Cusuh Ham,
and James Hays. The sketchy database: learning
to retrieve badly drawn bunnies. ACM Transac-
tions on Graphics (TOG) , 35(4):1{12, 2016.
[106] Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher
Yu, and James Hays. Scribbler: Controlling deep
image synthesis with sketch and color. In Proceed-
ings of the IEEE Conference on Computer Vision
and Pattern Recognition , pages 5400{5409, 2017.
[107] Aliaksandr Siarohin, Enver Sangineto, Stéphane
Lathuilière, and Nicu Sebe. Deformable gans for
pose-based human image generation. In Proceed-
ings of the IEEE Conference on Computer Vision
and Pattern Recognition , pages 3408{3416, 2018.
[108] Edgar Simo-Serra, Satoshi Iizuka, Kazuma
Sasaki, and Hiroshi Ishikawa. Learning to sim-
plify: fully convolutional networks for rough
sketch cleanup. ACM Transactions on Graphics
(TOG) , 35(4):1{11, 2016.
[109] Kihyuk Sohn, Honglak Lee, and Xinchen Yan.
Learning structured output representation using
deep conditional generative models. In Advances
in neural information processing systems , pages
3483{3491, 2015.
[110] Shuran Song, Fisher Yu, Andy Zeng, Angel X
Chang, Manolis Savva, and Thomas Funkhouser.
Semantic scene completion from a single depth
image. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition , pages
1746{1754, 2017.
[111] Sijie Song, Wei Zhang, Jiaying Liu, and Tao Mei.
Unsupervised person image generation with se-
mantic parsing transformation. In Proceedings of
the IEEE Conference on Computer Vision and
Pattern Recognition , pages 2357{2366, 2019.
[112] Christian Szegedy, Vincent Vanhoucke, Sergey
Ioffe, Jon Shlens, and Zbigniew Wojna. Rethink-
ing the inception architecture for computer vision.
InProceedings of the IEEE conference on com-
puter vision and pattern recognition , pages 2818{
2826, 2016.
[113] Fuwen Tan, Song Feng, and Vicente Or-
donez. Text2scene: Generating compositional
scenes from textual descriptions. arXiv preprint
arXiv:1809.01110 , 2018.
[114] Hao Tang, Dan Xu, Yan Yan, Philip HS Torr,
and Nicu Sebe. Local class-speci c and global
image-level generative adversarial networks for
semantic-guided scene generation. In Proceedings
of the IEEE/CVF Conference on Computer Vi-
sion and Pattern Recognition , pages 7870{7879,
2020.
[115] Daniyar Turmukhambetov, Neill DF Campbell,
Dan B Goldman, and Jan Kautz. Interac-
tive sketch-driven image synthesis. In Computer
Graphics Forum , volume 34, pages 130{142. Wi-
ley Online Library, 2015.
[116] Arash Vahdat and Jan Kautz. NVAE: A deep
hierarchical variational autoencoder. In Neural
Information Processing Systems (NeurIPS) , 2020.
[117] Aaron Van den Oord, Nal Kalchbrenner, Lasse
Espeholt, Oriol Vinyals, Alex Graves, et al. Con-
ditional image generation with pixelcnn decoders.
InAdvances in neural information processing sys-
tems, pages 4790{4798, 2016.
[118] Aaron Van Oord, Nal Kalchbrenner, and Koray
Kavukcuoglu. Pixel recurrent neural networks. In
International Conference on Machine Learning ,
pages 1747{1756, 2016.
[119] Petar Veli ckovi c, Guillem Cucurull, Arantxa
Casanova, Adriana Romero, Pietro Li o, and
Yoshua Bengio. Graph attention networks. In
International Conference on Learning Represen-
tations , 2018.
[120] Jingyu Wang, Yu Zhao, Qi Qi, Qiming Huo, Jian
Zou, Ce Ge, and Jianxin Liao. Mindcamera: In-
teractive sketch-based image retrieval and synthe-
sis.IEEE Access , 6:3765{3773, 2018.
[121] Miao Wang, Xu-Quan Lyu, Yi-Jun Li, and Fang-
Lue Zhang. Vr content creation and exploration
with deep learning: A survey. Computational Vi-
sual Media , 6(1):3{28, 2020.
[122] Miao Wang, Guo-Wei Yang, Shi-Min Hu, Shing-Tung Yau, and Ariel Shamir. Write-a-video: Computational video montage from themed text. ACM Transactions on Graphics, 38(6):177:1–177:13, 2019.
[123] Miao Wang, Guo-Ye Yang, Ruilong Li, Run-Ze Liang, Song-Hai Zhang, Peter M Hall, and Shi-Min Hu. Example-guided style-consistent image synthesis from semantic labeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1495–1504, 2019.
[124] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8798–8807, 2018.
[125] Xiaogang Wang and Xiaoou Tang. Face photo-sketch synthesis and recognition. IEEE transactions on pattern analysis and machine intelligence, 31(11):1955–1967, 2008.
[126] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
[127] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398–1402. IEEE, 2003.
[128] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[129] Xin Wen, Miao Wang, Christian Richardt, Ze-Yin Chen, and Shi-Min Hu. Photorealistic audio-driven video portraits. IEEE Transactions on Visualization and Computer Graphics, 26(12):3457–3466, 2020.
[130] Holger Winnemöller, Jan Eric Kyprianidis, and Sven C Olsen. Xdog: an extended difference-of-gaussians compendium including advanced image stylization. Computers & Graphics, 36(6):740–753, 2012.
[131] Weihao Xia, Yujiu Yang, and Jing-Hao Xue. Cali-sketch: Stroke calibration and completion for high-quality face image generation from poorly-drawn sketches. arXiv preprint arXiv:1911.00426, 2019.
[132] Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In Proceedings of the IEEE international conference on computer vision, pages 1395–1403, 2015.
[133] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1316–1324, 2018.
[134] Yuan Xue, Zihan Zhou, and Xiaolei Huang. Neural wireframe renderer: Learning wireframe to image translations. In 2020 European Conference on Computer Vision, 2020.
[135] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776–791. Springer, 2016.
[136] Guojun Yin, Bin Liu, Lu Sheng, Nenghai Yu, Xiaogang Wang, and Jing Shao. Semantics disentangling for text-to-image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2327–2336, 2019.
[137] A. Yu and K. Grauman. Fine-grained visual comparisons with local learning. In Computer Vision and Pattern Recognition (CVPR), Jun 2014.
[138] A. Yu and K. Grauman. Semantic jitter: Dense supervision for visual comparisons via synthetic images. In International Conference on Computer Vision (ICCV), Oct 2017.
[139] Qian Yu, Feng Liu, Yi-Zhe Song, Tao Xiang, Timothy Hospedales, and Chen Change Loy. Sketch me that shoe. In Computer Vision and Pattern Recognition, 2016.
[140] Gang Zhang, Meina Kan, Shiguang Shan, and Xilin Chen. Generative adversarial network with spatial attention for face attribute editing. In Proceedings of the European conference on computer vision (ECCV), pages 417–432, 2018.
[141] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In International conference on machine learning, pages 7354–7363. PMLR, 2019.
[142] Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. arXiv preprint arXiv:2101.04702, 2021.
[143] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 5907–5915, 2017.
[144] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan++: Realistic image synthesis with stacked generative adversarial networks. IEEE transactions on pattern analysis and machine intelligence, 41(8):1947–1962, 2018.
[145] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018.
[146] Zizhao Zhang, Yuanpu Xie, and Lin Yang. Photographic text-to-image synthesis with a hierarchically-nested adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6199–6208, 2018.
[147] Bo Zhao, Lili Meng, Weidong Yin, and Leonid Sigal. Image generation from layout. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8584–8593, 2019.
[148] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. Scalable person re-identification: A benchmark. In IEEE International Conference on Computer Vision, 2015.
[149] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. arXiv preprint arXiv:1608.05442, 2016.
[150] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[151] Hang Zhou, Yu Liu, Ziwei Liu, Ping Luo, and Xiaogang Wang. Talking face generation by adversarially disentangled audio-visual representation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9299–9306, 2019.
[152] Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5802–5810, 2019.
[153] Peihao Zhu, Rameen Abdal, Yipeng Qin, and Peter Wonka. Sean: Image synthesis with semantic region-adaptive normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5104–5113, 2020.
[154] Zhen Zhu, Tengteng Huang, Baoguang Shi, Miao Yu, Bofei Wang, and Xiang Bai. Progressive pose attention transfer for person image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2347–2356, 2019.
[155] Zhen Zhu, Zhiliang Xu, Ansheng You, and Xiang Bai. Semantically multi-modal image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5467–5476, 2020.