

Poster

LevAttention: Time, Space and Streaming Efficient Algorithm for Heavy Attentions

Ravindran Kannan · Chiranjib Bhattacharyya · Praneeth Kacham · David Woodruff

Hall 3 + Hall 2B #462
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: A central problem related to transformers can be stated as follows: given two n×d matrices Q and K, and a non-negative function f, define the matrix A as follows: (1) apply f to each entry of the n×n matrix QK^T, and then (2) normalize each row of the result so that its entries sum to 1. The matrix A can be computed in O(n^2 d) time assuming f can be applied to a number in constant time, but the quadratic dependence on n is prohibitive in applications where it corresponds to long context lengths. For a large class of functions f, we show how to find all the "large attention scores", i.e., entries of A that are at least a positive threshold ε, in time with linear dependence on n (i.e., n·poly(d/ε)) for a positive parameter ε > 0. Our class of functions includes all functions of the form f(x) = |x|^p, as explored recently in transformer models. Using recently developed tools from randomized numerical linear algebra, we prove that for any K there is a "universal set" U ⊆ [n] of size independent of n, such that for any Q and any row i, the large attention scores A_{i,j} in row i of A all have j ∈ U. We also find U in n·poly(d/ε) time. Notably, (1) we make no assumptions on the data, (2) our workspace does not grow with n, and (3) our algorithms can be computed in streaming and parallel settings. We empirically demonstrate the benefits of our scheme for vision transformers, training new models that use our universal set during training itself and showing that our models consistently select "important keys" during training. We also provide theoretical motivation by formulating a planted model in which our efficient algorithms provably identify the relevant keys for each query.
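The setup above can be illustrated with a small NumPy sketch. It computes the attention matrix A exactly for f(x) = |x|^p with p = 2 (the O(n^2 d) baseline), thresholds it at ε to find the large attention scores, and then computes the statistical leverage scores of the rows of K as one candidate for the universal set U. The leverage-score reading is a hypothetical interpretation suggested by the name "LevAttention"; the threshold used for U, and all variable names, are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 16, 0.05

Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))

# Exact attention for f(x) = |x|^2: apply f entrywise to QK^T,
# then normalize each row to sum to 1. This is the O(n^2 d) baseline.
S = (Q @ K.T) ** 2
A = S / S.sum(axis=1, keepdims=True)

# "Large attention scores": entries A[i, j] >= eps.
large = np.argwhere(A >= eps)

# Hypothetical universal-set candidate via leverage scores of K:
# tau_j = k_j^T (K^T K)^+ k_j, where k_j is the j-th row of K.
# (An assumption based on the method's name; the paper may differ.)
G = K.T @ K
tau = np.einsum('ij,jk,ik->i', K, np.linalg.pinv(G), K)
U = np.flatnonzero(tau >= eps)  # threshold choice is illustrative

# Note: |U| depends only on d and eps here, since leverage scores
# sum to rank(K) <= d, so at most d/eps of them can exceed eps.
```

On synthetic Gaussian data the high-leverage keys need not coincide with the large attention scores; the sketch only makes the objects in the abstract concrete, not the paper's guarantees.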
