Wasserstein GANs with Gradient Penalty: [Paper](https://arxiv.org/abs/1704.00028)
The original Wasserstein GAN leverages the Wasserstein distance to produce a value function with better theoretical properties than the value function used in the original GAN paper. WGAN requires that the discriminator (also called the critic) lie within the space of 1-Lipschitz functions. The authors proposed weight clipping to achieve this constraint. Although weight clipping works, it is a problematic way to enforce the 1-Lipschitz constraint and can cause undesirable behavior; for example, a very deep WGAN critic often fails to converge.
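
For reference, the weight-clipping idea described above is simple to state in code. The following is a minimal TensorFlow sketch, not code from this repository; the helper name is made up here, and the clip value c = 0.01 is the default from the original WGAN paper.

```python
import tensorflow as tf

def clip_critic_weights(critic: tf.keras.Model, clip_value: float = 0.01) -> None:
    """Original WGAN constraint: clamp every critic weight to [-c, c].

    Called after each critic optimizer step. Forcing all weights into a
    small box is what crudely enforces the 1-Lipschitz constraint, and it
    is exactly the step that WGAN-GP replaces with a gradient penalty.
    """
    for var in critic.trainable_variables:
        var.assign(tf.clip_by_value(var, -clip_value, clip_value))
```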
The WGAN-GP method proposes an alternative to weight clipping to ensure smooth training. Instead of clipping the weights, the authors add a "gradient penalty" loss term that keeps the L2 norm of the critic's gradients close to 1, where the gradients are taken with respect to the critic's input, evaluated at points interpolated between real and generated samples.
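
A minimal sketch of how this penalty is typically computed in TensorFlow (illustrative only, not necessarily this repository's implementation). It assumes 4-D image batches of shape (batch, height, width, channels) and uses the penalty coefficient lambda = 10 from the paper.

```python
import tensorflow as tf

def gradient_penalty(critic: tf.keras.Model,
                     real: tf.Tensor,
                     fake: tf.Tensor) -> tf.Tensor:
    """WGAN-GP term: push the critic's input-gradient norm toward 1."""
    batch_size = tf.shape(real)[0]
    # Sample random points on the straight lines between real and fake images.
    eps = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = eps * real + (1.0 - eps) * fake

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = critic(interpolated, training=True)

    # Gradient of the critic's output with respect to its *input*, not its weights.
    grads = tape.gradient(scores, interpolated)
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return tf.reduce_mean((norms - 1.0) ** 2)

# Added to the critic loss with the paper's coefficient lambda = 10:
#   critic_loss = mean(fake_scores) - mean(real_scores)
#                 + 10.0 * gradient_penalty(critic, real_images, fake_images)
```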