

Poster

Rethinking Attention with Performers

Krzysztof Choromanski · Valerii Likhosherstov · David Dohan · Xingyou Song · Georgiana-Andreea Gane · Tamas Sarlos · Peter Hawkins · Jared Q Davis · Afroz Mohiuddin · Lukasz Kaiser · David Belanger · Lucy J Colwell · Adrian Weller

Keywords: [ attention ] [ transformer ] [ sparsity ] [ softmax ] [ linear ] [ approximation ] [ BERT ] [ Performer ] [ bidirectional ] [ unidirectional ] [ orthogonal ] [ random ] [ features ] [ FAVOR ] [ kernel ] [ generalized ] [ Reformer ] [ Linformer ] [ protein ] [ TrEMBL ] [ UniProt ]


Abstract:

We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and to investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel prediction through text models to protein sequence modeling. We demonstrate results competitive with other examined efficient sparse and dense attention methods, showcasing the effectiveness of the novel attention-learning paradigm leveraged by Performers.
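As a concrete illustration of the mechanism sketched in the abstract, below is a minimal NumPy sketch (not the authors' implementation) of FAVOR+-style bidirectional attention: queries and keys are mapped through positive random features whose dot products approximate the softmax kernel, and the matrix products are reassociated so the n-by-n attention matrix is never materialized, giving linear cost in sequence length. The function names, the `num_features` parameter, and the choice to omit the orthogonalization of the random projections and the unidirectional (causal) prefix-sum variant are simplifications for illustration only.

```python
import numpy as np


def softmax_positive_features(x, omega):
    """Positive random-feature map whose dot products approximate exp(q . k).

    x:     (n, d) queries or keys, already rescaled by d ** -0.25.
    omega: (d, m) Gaussian random projection matrix (orthogonalization omitted).
    """
    m = omega.shape[1]
    projection = x @ omega                                    # (n, m)
    # The exp(-||x||^2 / 2) prefactor keeps every feature strictly positive.
    squared_norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2.0
    return np.exp(projection - squared_norm) / np.sqrt(m)


def linear_attention(q, k, v, num_features=256, seed=0):
    """Bidirectional linear attention approximating softmax attention.

    q, k: (n, d); v: (n, d_v). Cost is O(n * m * d) instead of O(n^2 * d).
    """
    n, d = q.shape
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((d, num_features))
    # Fold the usual 1/sqrt(d) softmax temperature into the inputs.
    scale = d ** -0.25
    q_prime = softmax_positive_features(q * scale, omega)     # (n, m)
    k_prime = softmax_positive_features(k * scale, omega)     # (n, m)
    # Associativity: (Q' K'^T) V  ->  Q' (K'^T V); no n x n matrix is formed.
    kv = k_prime.T @ v                                         # (m, d_v)
    normalizer = q_prime @ k_prime.sum(axis=0)[:, None]        # (n, 1)
    return (q_prime @ kv) / normalizer


# Sanity check against exact softmax attention on a small example.
if __name__ == "__main__":
    n, d = 128, 16
    rng = np.random.default_rng(1)
    q, k, v = rng.standard_normal((3, n, d))
    approx = linear_attention(q, k, v)

    logits = q @ k.T / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    exact = weights @ v
    print("max abs deviation:", np.abs(approx - exact).max())
```

Increasing `num_features` trades compute for a tighter approximation of the exact softmax attention output; the paper's orthogonal random features further reduce the estimator's variance at the same feature count.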
