
What is: Wasserstein GAN (Gradient Penalty)?

Source: Improved Training of Wasserstein GANs
Year: 2017
Data Source: CC BY-SA - https://paperswithcode.com

Wasserstein GAN + Gradient Penalty, or WGAN-GP, is a generative adversarial network that uses the Wasserstein loss formulation plus a gradient norm penalty on the critic to enforce the 1-Lipschitz constraint.
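
The critic objective from the paper combines the Wasserstein loss with the penalty term; here D is the critic, P_r and P_g are the real and generated distributions, λ is the penalty coefficient, and x̂ is sampled uniformly along straight lines between pairs of real and generated samples:

```latex
L = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\!\big[D(\tilde{x})\big]
  - \mathbb{E}_{x\sim\mathbb{P}_r}\!\big[D(x)\big]
  + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\!\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\Big]
```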

The original WGAN uses weight clipping to achieve 1-Lipschitz functions, but this can lead to undesirable behaviour: pathological value surfaces, underuse of the critic's capacity, and exploding or vanishing gradients unless the weight clipping parameter c is tuned carefully.
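
For comparison, the original WGAN's weight clipping amounts to clamping every critic parameter into [-c, c] after each critic update. A minimal PyTorch sketch (the name `critic` is an illustrative placeholder):

```python
import torch

def clip_critic_weights(critic, c=0.01):
    # Original WGAN: clamp every critic weight into [-c, c] after each critic update
    # (0.01 is the clipping threshold suggested in the WGAN paper)
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```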

A gradient penalty is a soft version of the Lipschitz constraint, which follows from the fact that a differentiable function is 1-Lipschitz if and only if its gradients have norm at most 1 everywhere. WGAN-GP evaluates the critic's gradient at points interpolated between real and generated samples and uses the squared difference of its norm from 1 as the gradient penalty.
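
A minimal PyTorch sketch of that penalty term, assuming `critic` maps a batch of samples to a batch of scalar scores and `real`/`fake` are same-shaped batches (all names here are illustrative, not from the source):

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: squared deviation of the critic's gradient norm from 1,
    evaluated at points interpolated between real and generated samples."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over the remaining dims
    eps = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=device)
    interpolated = (eps * real + (1 - eps) * fake).detach()
    interpolated.requires_grad_(True)

    scores = critic(interpolated)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,   # keep the graph so the penalty can be backpropagated
        retain_graph=True,
    )[0]

    grad_norm = grads.reshape(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```

The critic loss then adds this term scaled by the penalty coefficient, e.g. `critic(fake).mean() - critic(real).mean() + 10 * gradient_penalty(critic, real, fake)`, where λ = 10 is the coefficient used in the paper.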