
What is: R1 Regularization?

Source: Which Training Methods for GANs do actually Converge?
Year: 2018
Data Source: CC BY-SA - https://paperswithcode.com

$R_1$ Regularization is a regularization technique and gradient penalty for training generative adversarial networks. It discourages the discriminator from deviating from the Nash equilibrium by penalizing its gradient on real data alone: when the generator distribution matches the true data distribution and the discriminator is equal to zero on the data manifold, the gradient penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the GAN game.

This leads to the following regularization term:

$$R_{1}(\psi) = \frac{\gamma}{2}\,\mathbb{E}_{p_{D}(x)}\left[\lVert \nabla D_{\psi}(x) \rVert^{2}\right]$$
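For concreteness, here is a minimal sketch of how this penalty can be computed during discriminator training, assuming a PyTorch setup; the names `discriminator`, `real_images`, and `gamma` are illustrative placeholders rather than anything prescribed by the original paper.

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 gradient penalty: (gamma / 2) * E[ ||grad_x D(x)||^2 ], computed on real data only."""
    # Require gradients on the real inputs so we can differentiate D w.r.t. x
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)  # D_psi(x)

    # Gradient of the summed scores with respect to the real inputs
    (grad,) = torch.autograd.grad(
        outputs=scores.sum(),
        inputs=real_images,
        create_graph=True,  # keep the graph so the penalty itself can be backpropagated
    )

    # Squared L2 norm per sample, averaged over the batch
    grad_sqnorm = grad.reshape(grad.shape[0], -1).pow(2).sum(dim=1)
    return 0.5 * gamma * grad_sqnorm.mean()
```

In practice the returned penalty would be added to the standard discriminator loss at each update (or every few updates, in a lazy-regularization variant), with `gamma` playing the role of $\gamma$ in the formula above.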