
What is: Strided Attention?

Source: Generating Long Sequences with Sparse Transformers
Year: 2019
Data Source: CC BY-SA - https://paperswithcode.com

Strided Attention is a factorized attention pattern in which one head attends to the previous $l$ locations and the other head attends to every $l$-th location, where $l$ is the stride, chosen to be close to $\sqrt{n}$. It was proposed as part of the Sparse Transformer architecture.

A self-attention layer maps a matrix of input embeddings $X$ to an output matrix and is parameterized by a connectivity pattern $S = \{S_{1}, \dots, S_{n}\}$, where $S_{i}$ denotes the set of indices of the input vectors to which the $i$-th output vector attends. The output vector is a weighted sum of transformations of the input vectors:

$$\text{Attend}\left(X, S\right) = \left(a\left(\mathbf{x}_{i}, S_{i}\right)\right)_{i \in \{1, \dots, n\}}$$

$$a\left(\mathbf{x}_{i}, S_{i}\right) = \text{softmax}\left(\frac{\left(W_{q}\mathbf{x}_{i}\right) K_{S_{i}}^{T}}{\sqrt{d}}\right) V_{S_{i}}$$

$$K_{S_{i}} = \left(W_{k}\mathbf{x}_{j}\right)_{j \in S_{i}}$$

$$V_{S_{i}} = \left(W_{v}\mathbf{x}_{j}\right)_{j \in S_{i}}$$

Here $W_{q}$, $W_{k}$, and $W_{v}$ represent the weight matrices which transform a given $\mathbf{x}_{i}$ into a query, key, or value, and $d$ is the inner dimension of the queries and keys. The output at each position is a sum of the values weighted by the scaled dot-product similarity of the keys and queries.
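
For concreteness, here is a minimal NumPy sketch of this formulation (single head, no batching; the function name `attend` and the list-of-index-sets encoding of $S$ are illustrative choices, not the paper's optimized implementation):

```python
import numpy as np

def attend(X, S, W_q, W_k, W_v):
    """Sparse attention over a connectivity pattern S (one head, no batching).

    X            : (n, d_model) matrix of input embeddings
    S            : list where S[i] holds the indices the i-th output attends to
    W_q, W_k, W_v: (d_model, d) projections to queries, keys, and values
    """
    d = W_q.shape[1]
    out = np.zeros((len(S), W_v.shape[1]))
    for i, S_i in enumerate(S):
        q = X[i] @ W_q                    # query for position i
        K = X[S_i] @ W_k                  # keys K_{S_i}, restricted to S_i
        V = X[S_i] @ W_v                  # values V_{S_i}
        scores = (q @ K.T) / np.sqrt(d)   # scaled dot-product similarities
        w = np.exp(scores - scores.max())
        w /= w.sum()                      # softmax over S_i only
        out[i] = w @ V                    # weighted sum of the values
    return out
```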

Full self-attention for autoregressive models defines $S_{i} = \{j : j \leq i\}$, allowing every element to attend to all previous positions and its own position.
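
As a usage example with the hypothetical `attend` sketch above, the full autoregressive pattern is simply every prefix:

```python
n, d_model, d = 16, 32, 32
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d)) for _ in range(3))

# Full autoregressive attention: S_i = {j : j <= i}
S_full = [np.arange(i + 1) for i in range(n)]
Y = attend(X, S_full, W_q, W_k, W_v)   # each output attends over its full prefix
```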

Factorized self-attention instead has $p$ separate attention heads, where the $m$-th head defines a subset of the indices $A_{i}^{(m)} \subset \{j : j \leq i\}$ and lets $S_{i} = A_{i}^{(m)}$. The goal of the Sparse Transformer was to find efficient choices for the subset $A$.
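
The paper discusses several ways to integrate factorized heads, for example alternating one pattern per residual block or running the patterns as parallel heads whose outputs are concatenated. A sketch of the latter, multi-head style, again reusing the hypothetical `attend` above:

```python
def factorized_attention(X, patterns, head_weights):
    """Run p sparse heads, one connectivity pattern each, and concatenate.

    patterns    : list of p connectivity patterns (each a list of index sets)
    head_weights: list of p (W_q, W_k, W_v) triples
    """
    outputs = [attend(X, S, W_q, W_k, W_v)
               for S, (W_q, W_k, W_v) in zip(patterns, head_weights)]
    return np.concatenate(outputs, axis=-1)   # head outputs side by side
```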

Formally, for Strided Attention, $A_{i}^{(1)} = \{t, t+1, \dots, i\}$ for $t = \max\left(0, i - l\right)$, and $A_{i}^{(2)} = \{j : (i - j) \mod l = 0\}$. The $i$-th output vector of an attention head attends to all input vectors in either $A_{i}^{(1)}$ or $A_{i}^{(2)}$. This pattern is visualized in a figure in the original paper.
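
A small sketch of how these two index sets could be built (the function name `strided_pattern` is illustrative, and the stride is set near $\sqrt{n}$ as in the paper):

```python
import numpy as np

def strided_pattern(n, l):
    """Index sets for the two strided-attention heads.

    A1[i]: the previous l positions up to and including i (local window).
    A2[i]: every l-th position at or before i, i.e. (i - j) % l == 0.
    """
    A1 = [np.arange(max(0, i - l), i + 1) for i in range(n)]
    A2 = [np.array([j for j in range(i + 1) if (i - j) % l == 0]) for i in range(n)]
    return A1, A2

n = 16
l = int(round(np.sqrt(n)))   # stride chosen close to sqrt(n)
A1, A2 = strided_pattern(n, l)
print(A1[10])                # [ 6  7  8  9 10]
print(A2[10])                # [ 2  6 10]
```

Each head then runs the sparse attention defined above with $S_{i} = A_{i}^{(1)}$ or $S_{i} = A_{i}^{(2)}$.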

This formulation is convenient if the data naturally has a structure that aligns with the stride, like images or some types of music. For data without a periodic structure, like text, however, the authors find that the network can fail to properly route information with the strided pattern, as spatial coordinates for an element do not necessarily correlate with the positions where the element may be most relevant in the future.