
What is: Dynamic Convolution?

Source: Dynamic Convolution: Attention over Convolution Kernels
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

The extremely low computational budget of lightweight CNNs constrains the depth and width of the networks, which in turn limits their representational power. To address this problem, Chen et al. proposed dynamic convolution (developed in parallel with CondConv), a novel operator design that increases representational power with negligible additional computational cost and without changing the width or depth of the network.

Instead of a single kernel per layer, dynamic convolution uses $K$ parallel convolution kernels of the same size and input/output dimensions. Like SE blocks, it adopts a squeeze-and-excitation mechanism to generate the attention weights for the $K$ kernels. The kernels are then aggregated dynamically by weighted summation and applied to the input feature map $X$:

\begin{align} s &= \text{softmax}\left(W_{2}\, \delta\left(W_{1}\, \text{GAP}(X)\right)\right) \end{align}
\begin{align} \text{DyConv} &= \sum_{k=1}^{K} s_{k}\, \text{Conv}_{k} \end{align}
\begin{align} Y &= \text{DyConv}(X) \end{align}

Here $\text{GAP}$ denotes global average pooling and $\delta$ the ReLU activation; the convolutions are combined by summing the weights and biases of the $K$ kernels, weighted by the attention scores $s_{k}$.
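The sketch below illustrates this aggregation in PyTorch. It is a minimal, illustrative implementation, not the authors' code: the class name `DynamicConv2d`, the `reduction` ratio, and the softmax `temperature` argument are assumptions for the example, and the per-sample kernels are applied with a grouped-convolution trick.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Minimal sketch of dynamic convolution with K parallel kernels.

    A squeeze-and-excitation branch (GAP -> FC -> ReLU -> FC -> softmax)
    produces per-sample attention s over the K kernels; the kernel weights
    and biases are aggregated as sum_k s_k * W_k / sum_k s_k * b_k before
    a single convolution is applied.
    """

    def __init__(self, in_channels, out_channels, kernel_size, K=4,
                 reduction=4, stride=1, padding=0, temperature=30.0):
        super().__init__()
        self.K = K
        self.stride = stride
        self.padding = padding
        self.temperature = temperature  # softens the softmax (assumed hyperparameter)
        # K parallel kernels of identical shape (simple random init for the sketch)
        self.weight = nn.Parameter(
            torch.randn(K, out_channels, in_channels, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(K, out_channels))
        # squeeze-and-excitation style attention over the K kernels
        hidden = max(in_channels // reduction, 1)
        self.fc1 = nn.Linear(in_channels, hidden)
        self.fc2 = nn.Linear(hidden, K)

    def forward(self, x):
        B, C, H, W = x.shape
        # s = softmax(W2 * relu(W1 * GAP(x))), shape (B, K)
        s = F.adaptive_avg_pool2d(x, 1).flatten(1)
        s = torch.softmax(self.fc2(F.relu(self.fc1(s))) / self.temperature, dim=1)
        # aggregate kernel weights and biases with the attention scores
        weight = torch.einsum('bk,koihw->boihw', s, self.weight)   # (B, O, C, kh, kw)
        bias = torch.einsum('bk,ko->bo', s, self.bias)             # (B, O)
        # apply each sample's aggregated kernel via a grouped convolution
        x = x.reshape(1, B * C, H, W)
        weight = weight.reshape(B * weight.size(1), C, *weight.shape[-2:])
        out = F.conv2d(x, weight, bias.reshape(-1), stride=self.stride,
                       padding=self.padding, groups=B)
        return out.reshape(B, -1, out.size(-2), out.size(-1))
```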

Compared to applying the convolution itself to the feature map, the computational cost of the squeeze-and-excitation attention and the weighted summation of kernels is extremely low. Dynamic convolution thus provides an efficient way to improve representational power and can easily be used as a drop-in replacement for any convolution layer.
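For instance, a standard convolution in a block can be swapped for the sketch above (shapes shown are illustrative):

```python
# assumes the DynamicConv2d sketch above and `import torch`
# instead of: conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
conv = DynamicConv2d(64, 128, kernel_size=3, K=4, padding=1)
x = torch.randn(8, 64, 32, 32)
y = conv(x)  # -> shape (8, 128, 32, 32), same as the static convolution
```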