
What is: efficient channel attention?

Source: ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

An ECA block has a similar formulation to an SE block, including a squeeze module for aggregating global spatial information and an efficient excitation module for modeling cross-channel interaction. Instead of indirect correspondence, an ECA block considers only direct interaction between each channel and its $k$ nearest neighbors to control model complexity. Overall, the formulation of an ECA block is:

\begin{align}
s & = F_\text{eca}(X, \theta) = \sigma (\text{Conv1D}(\text{GAP}(X))) \\
Y & = s X
\end{align}

where $\text{Conv1D}(\cdot)$ denotes a 1D convolution with a kernel of size $k$ across the channel dimension, modeling local cross-channel interaction. The parameter $k$ determines the coverage of interaction; in ECA, the kernel size $k$ is adaptively determined from the channel dimensionality $C$, avoiding manual tuning via cross-validation:

\begin{equation}
k = \psi(C) = \left| \frac{\log_2(C)}{\gamma} + \frac{b}{\gamma} \right|_\text{odd}
\end{equation}

where $\gamma$ and $b$ are hyperparameters, and $|x|_\text{odd}$ denotes the nearest odd number to $x$.
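The formulation above can be sketched as a small PyTorch module. This is a minimal illustration, not the authors' reference implementation; the hyperparameter defaults $\gamma = 2$ and $b = 1$ follow the values commonly used with this formula, and the class name `ECABlock` is chosen here for clarity:

```python
import math

import torch
import torch.nn as nn


class ECABlock(nn.Module):
    """Efficient Channel Attention: GAP -> 1D conv over channels -> sigmoid gate.

    A sketch of the ECA formulation; gamma=2 and b=1 are assumed defaults.
    """

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive kernel size k = psi(C): nearest odd number to log2(C)/gamma + b/gamma
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 == 1 else t + 1
        # Single 1D conv shared across channels models local cross-channel interaction
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                     # squeeze: global average pooling -> (N, C)
        s = self.conv(s.unsqueeze(1)).squeeze(1)   # excitation: 1D conv across the channel axis
        s = torch.sigmoid(s)                       # channel weights in (0, 1)
        return x * s[:, :, None, None]             # rescale each channel of the input
```

For example, with $C = 64$, $\gamma = 2$, $b = 1$ the formula gives $k = |(6 + 1)/2|_\text{odd} = 3$, so each channel attends to itself and its two nearest neighbors.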

Compared to SENet, ECANet has an improved excitation module and provides an efficient and effective block that can readily be incorporated into various CNNs.