Papers
arxiv:1703.02000

Activation Maximization Generative Adversarial Nets

Published on Mar 6, 2017

Abstract

Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class aware gradient and cross-entropy decomposition, we reveal how class labels and associated losses influence GAN's training. Based on that, we propose Activation Maximization Generative Adversarial Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been conducted to validate our analysis and evaluate the effectiveness of our solution, where AM-GAN outperforms other strong baselines and achieves state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we demonstrate that, with the Inception ImageNet classifier, Inception Score mainly tracks the diversity of the generator, and there is, however, no reliable evidence that it can reflect the true sample quality. We thus propose a new metric, called AM Score, to provide a more accurate estimation of the sample quality. Our proposed model also outperforms the baseline methods in the new metric.
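The abstract's claim that Inception Score mainly tracks generator diversity follows from the metric's definition, IS = exp(E_x[KL(p(y|x) || p(y))]), which rewards a classifier marginal p(y) that is spread evenly across classes. A minimal sketch of this computation, assuming the classifier's softmax outputs are already available as an (N, C) array (the real metric uses the Inception ImageNet classifier and exponentiates the mean KL over several splits):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from classifier softmax outputs.

    probs: (N, C) array of per-sample class probabilities p(y|x).
    Returns exp(mean KL(p(y|x) || p(y))), where p(y) is the
    empirical marginal over the N samples.
    """
    probs = np.asarray(probs, dtype=np.float64)
    marginal = probs.mean(axis=0)  # p(y): average prediction over samples
    # KL divergence per sample; eps guards against log(0)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

Two edge cases illustrate the diversity point: if every sample gets a uniform prediction, each KL term is zero and the score is 1; if samples are confidently and evenly spread over C classes, the score reaches its maximum of C, regardless of how realistic the individual samples look.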
