
What is: Weight Standardization?

Source: Micro-Batch Training with Batch-Channel Normalization and Weight Standardization
Year: 2019
Data Source: CC BY-SA - https://paperswithcode.com

Weight Standardization (WS) is a normalization technique that smooths the loss landscape by standardizing the weights in convolutional layers. Unlike previous normalization methods, which operate on activations, WS targets the weights themselves, and its benefit comes from this smoothing effect rather than from length-direction decoupling alone. Theoretically, WS reduces the Lipschitz constants of the loss and of the gradients, so it smooths the loss landscape and improves training.

In Weight Standardization, instead of directly optimizing the loss $\mathcal{L}$ on the original weights $\hat{W}$, we reparameterize the weights $\hat{W}$ as a function of $W$, i.e. $\hat{W} = \text{WS}(W)$, and optimize the loss $\mathcal{L}$ on $W$ by SGD:

$$\hat{W} = \Big[ \hat{W}_{i,j} \;\big|\; \hat{W}_{i,j} = \frac{W_{i,j} - \mu_{W_{i,\cdot}}}{\sigma_{W_{i,\cdot}} + \epsilon} \Big]$$

$$y = \hat{W} \ast x$$

where

$$\mu_{W_{i,\cdot}} = \frac{1}{I}\sum_{j=1}^{I} W_{i,j}, \qquad \sigma_{W_{i,\cdot}} = \sqrt{\frac{1}{I}\sum_{j=1}^{I}\big(W_{i,j} - \mu_{W_{i,\cdot}}\big)^2}$$
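
To make the reparameterization concrete, below is a minimal sketch of a weight-standardized convolution in PyTorch (the framework choice, the class name `WSConv2d`, and the value of `eps` are illustrative assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d that standardizes its weights per output channel before convolving.

    For each output channel i, the flattened weight row W_{i,.} (over
    in_channels x kH x kW) is shifted to zero mean and divided by its
    standard deviation plus epsilon, as in the equations above.
    """

    def __init__(self, *args, eps: float = 1e-5, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps  # small constant added to sigma for numerical stability

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight                                    # (C_out, C_in, kH, kW)
        mu = w.mean(dim=(1, 2, 3), keepdim=True)           # mu_{W_{i,.}}
        sigma = w.std(dim=(1, 2, 3), keepdim=True, unbiased=False)  # sigma_{W_{i,.}}
        w_hat = (w - mu) / (sigma + self.eps)              # standardized weights
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```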

Similar to Batch Normalization, WS controls the first and second moments of the weights of each output channel individually in convolutional layers. Note that many initialization methods also initialize the weights in similar ways. Unlike those methods, WS standardizes the weights in a differentiable way, which aims to normalize gradients during back-propagation. Note that there is no affine transformation on $\hat{W}$: we assume that normalization layers such as BN or GN will normalize this convolutional layer again.
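
Because $\hat{W}$ carries no affine transformation and is expected to be followed by an activation normalizer, a typical layer ordering is standardized convolution, then BN or GN, then the nonlinearity. A hedged usage sketch, reusing the illustrative `WSConv2d` class above with arbitrary channel and group sizes:

```python
import torch.nn as nn

# Illustrative block: WS handles the weights, GroupNorm handles the activations.
# Channel counts and the number of groups are arbitrary choices for this example.
block = nn.Sequential(
    WSConv2d(64, 64, kernel_size=3, padding=1, bias=False),
    nn.GroupNorm(num_groups=32, num_channels=64),
    nn.ReLU(inplace=True),
)
```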