
What is: WGAN-GP Loss?

Source: Improved Training of Wasserstein GANs
Year: 2017
Data Source: CC BY-SA - https://paperswithcode.com

Wasserstein Gradient Penalty Loss, or WGAN-GP Loss, is a loss used for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty for random samples $\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}$ to achieve Lipschitz continuity:

$$L = \mathbb{E}_{\tilde{\mathbf{x}} \sim \mathbb{P}_{g}}\left[D\left(\tilde{\mathbf{x}}\right)\right] - \mathbb{E}_{\mathbf{x} \sim \mathbb{P}_{r}}\left[D\left(\mathbf{x}\right)\right] + \lambda\,\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(\left\|\nabla_{\hat{\mathbf{x}}}D\left(\hat{\mathbf{x}}\right)\right\|_{2}-1\right)^{2}\right]$$
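As a rough sketch of how the three terms combine (not the paper's implementation), the loss can be computed in NumPy for a hypothetical linear critic $D(\mathbf{x}) = \mathbf{w}^\top\mathbf{x}$, whose input gradient is simply $\mathbf{w}$ in closed form; a real implementation would obtain $\nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}})$ via automatic differentiation. The interpolation scheme for $\hat{\mathbf{x}}$ (uniform mixing of real and generated samples) follows the paper; the critic and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear critic D(x) = w . x; its input gradient is just w,
# so the gradient-penalty term has a closed form for this illustration.
w = np.array([0.6, 0.8])  # ||w||_2 = 1.0, i.e. this D is already 1-Lipschitz

def critic(x):
    return x @ w

def wgan_gp_loss(real, fake, lam=10.0):
    """E[D(x_tilde)] - E[D(x)] + lam * E[(||grad D(x_hat)||_2 - 1)^2]."""
    # x_hat: random interpolates between real and generated samples,
    # drawn as eps * real + (1 - eps) * fake with eps ~ U[0, 1].
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake
    # For the linear critic, grad_x D(x_hat) = w for every x_hat.
    grad_norms = np.full(len(x_hat), np.linalg.norm(w))
    penalty = lam * np.mean((grad_norms - 1.0) ** 2)
    return critic(fake).mean() - critic(real).mean() + penalty
```

Because this toy critic has $\|\mathbf{w}\|_2 = 1$, the penalty term vanishes and the loss reduces to the Wasserstein estimate $\mathbb{E}[D(\tilde{\mathbf{x}})] - \mathbb{E}[D(\mathbf{x})]$; for a critic that violates the unit-gradient-norm condition, the penalty grows quadratically with the deviation, weighted by $\lambda$ (set to 10 in the paper).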

It was introduced as part of the overall WGAN-GP model.