---
library_name: keras
tags:
- GAN
---

## Generative Adversarial Network

This repo contains the model and the notebook for [this Keras example on WGAN-GP](https://keras.io/examples/generative/wgan_gp/).<br>
Full credits to: [A_K_Nain](https://twitter.com/A_K_Nain)<br>
Space link: [Demo](https://huggingface.co./spaces/IMvision12/WGAN-GP)

## Wasserstein GAN (WGAN) with Gradient Penalty (GP)

Original WGAN paper: [Paper](https://arxiv.org/abs/1701.07875)<br>
Wasserstein GANs with Gradient Penalty: [Paper](https://arxiv.org/abs/1704.00028)

The original Wasserstein GAN leverages the Wasserstein distance to produce a value function that has better theoretical properties than the value function used in the original GAN paper. WGAN requires that the discriminator (aka the critic) lie within the space of 1-Lipschitz functions. The authors proposed weight clipping to achieve this constraint. Though weight clipping works, it can be a problematic way to enforce the 1-Lipschitz constraint and can cause undesirable behavior; for example, a very deep WGAN discriminator (critic) often fails to converge.
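To make the weight-clipping constraint concrete, here is a minimal sketch (not the code from this repo) of how the original WGAN enforces it in TensorFlow/Keras: after each critic optimizer step, every trainable weight is clamped to a small interval `[-c, c]`. The clip value `0.01` is the one used in the original WGAN paper.

```python
import tensorflow as tf

def clip_critic_weights(critic, clip_value=0.01):
    """Original WGAN constraint: clamp every critic weight to
    [-clip_value, clip_value] after each optimizer step.
    clip_value=0.01 is the default from the WGAN paper."""
    for w in critic.trainable_weights:
        w.assign(tf.clip_by_value(w, -clip_value, clip_value))
```

Because the clip interval is a blunt proxy for the Lipschitz constraint, too small a `c` starves gradients and too large a `c` fails to constrain the critic, which is the behavior WGAN-GP was designed to avoid.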

The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors proposed a "gradient penalty" by adding a loss term that keeps the L2 norm of the discriminator gradients close to 1.
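The gradient-penalty term can be sketched as follows (a minimal illustration, not the exact code from this repo): sample points on straight lines between real and fake images, evaluate the critic's gradient at those points with `tf.GradientTape`, and penalize the squared deviation of the per-sample L2 norm from 1. The 4-D image shape is an assumption for illustration.

```python
import tensorflow as tf

def gradient_penalty(critic, real_images, fake_images):
    """WGAN-GP penalty: push the L2 norm of the critic's gradient
    toward 1 at random interpolations of real and fake samples."""
    batch_size = tf.shape(real_images)[0]
    # Random interpolation coefficient, one per sample
    alpha = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = real_images + alpha * (fake_images - real_images)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        pred = tape.watched_variables  # placeholder removed below
        pred = critic(interpolated, training=True)
    grads = tape.gradient(pred, interpolated)
    # Per-sample L2 norm over all image dimensions
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    # Mean squared deviation of the norm from 1
    return tf.reduce_mean((norm - 1.0) ** 2)
```

In training, this scalar is scaled by a penalty weight (commonly 10) and added to the critic loss in place of weight clipping.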