
What is: Graph Attention Network v2?

Source: How Attentive are Graph Attention Networks?
Year: 2021
Data Source: CC BY-SA - https://paperswithcode.com

The GATv2 operator, from the “How Attentive are Graph Attention Networks?” paper, fixes the static attention problem of the standard GAT layer: because the two linear layers in standard GAT are applied consecutively, they can collapse into a single linear map, so the ranking of attended nodes is unconditioned on the query node. In GATv2, the score is a genuinely joint function of the query and key nodes, so every node can attend to any other node with a query-dependent ranking.
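For comparison, the standard GAT (v1) scoring function, from the original “Graph Attention Networks” paper, applies the shared attention vector after the nonlinearity's argument has already been linearized:

e_{i,j} = \mathrm{LeakyReLU}\left(\mathbf{a}^{\top}\left[\mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j\right]\right)

Here the dot product with a splits into a term that depends only on h_i and a term that depends only on h_j, so the ranking over keys j is the same for every query i (static attention). GATv2 repairs this by moving the LeakyReLU between the two linear maps W and a, preventing them from collapsing into one.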

GATv2 scoring function:

e_{i,j} = \mathbf{a}^{\top}\,\mathrm{LeakyReLU}\left(\mathbf{W}\left[\mathbf{h}_i \,\Vert\, \mathbf{h}_j\right]\right)
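As a concrete illustration, below is a minimal PyTorch sketch of this scoring function for a single attention head and dense pairwise scores. The class name, shapes, and initialization are illustrative assumptions, not taken from the paper or any particular library:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATv2Score(nn.Module):
    """Sketch of the GATv2 scoring function, single head.

    Computes e_{i,j} = a^T LeakyReLU(W [h_i || h_j]) for every
    (query i, key j) pair of node features.
    """

    def __init__(self, in_dim: int, out_dim: int, negative_slope: float = 0.2):
        super().__init__()
        # W acts on the concatenation [h_i || h_j], so its input is 2 * in_dim.
        self.W = nn.Linear(2 * in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.empty(out_dim))
        nn.init.normal_(self.a, std=0.1)
        self.negative_slope = negative_slope

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        n = h.size(0)
        # Build all (i, j) feature concatenations: shape (n, n, 2 * in_dim).
        h_i = h.unsqueeze(1).expand(n, n, -1)
        h_j = h.unsqueeze(0).expand(n, n, -1)
        pairs = torch.cat([h_i, h_j], dim=-1)
        # The LeakyReLU sits *between* W and a, so the two linear maps
        # cannot collapse into one -- this is what makes the attention
        # ranking query-dependent (dynamic attention).
        scores = F.leaky_relu(self.W(pairs), self.negative_slope) @ self.a
        # A softmax over each node's neighbors j would follow, masked
        # by the graph's adjacency.
        return scores  # shape (n, n): scores[i, j] = e_{i,j}
```

In practice one would rarely write this by hand: libraries such as PyTorch Geometric ship a ready-made GATv2Conv layer that additionally handles multi-head attention, sparse adjacency, and the softmax normalization over neighborhoods.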