

Poster

SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration

Jintao Zhang · Jia Wei · Pengle Zhang · Jun Zhu · Jianfei Chen

Hall 3 + Hall 2B #143
[ Project Page ]
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: The transformer architecture predominates across a wide range of models. At the heart of the transformer, attention has a computational complexity of O(N²), compared to O(N) for linear transformations. When handling long sequences, attention becomes the primary time-consuming component. Although quantization has proven to be an effective method for accelerating model inference, existing quantization methods primarily focus on optimizing the linear layers. In response, we first analyze in detail the feasibility of quantization in attention. Following that, we propose SageAttention, a highly efficient and accurate quantization method for attention. The OPS (operations per second) of our approach outperforms FlashAttention2 and xformers by about 2.1x and 2.7x, respectively. SageAttention also achieves superior accuracy over FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metric loss across diverse models, including those for large language processing, image generation, and video generation. The code is available at https://github.com/thu-ml/SageAttention.
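The sketch below illustrates the general idea the abstract describes: quantizing Q and K to 8 bits before the QKᵀ matmul while keeping the softmax and the PV product in higher precision. It is a minimal PyTorch illustration under assumed per-token INT8 scales; the helper names (`per_token_int8`, `int8_attention`) are hypothetical and this is not the released SageAttention kernel, which relies on optimized fused GPU kernels.

```python
# Minimal sketch of 8-bit quantized attention (illustrative only, not the
# SageAttention implementation; quantization granularity is an assumption).
import torch

def per_token_int8(x: torch.Tensor):
    """Quantize the last dim of x to INT8 with one scale per token (row)."""
    scale = x.abs().amax(dim=-1, keepdim=True).float().clamp(min=1e-6) / 127.0
    q = torch.round(x / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def int8_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """q, k, v: (batch, heads, seq_len, head_dim) in floating point."""
    head_dim = q.shape[-1]
    q_i8, q_scale = per_token_int8(q)
    k_i8, k_scale = per_token_int8(k)
    # INT8 matmul emulated in int32 here; a real kernel would use INT8 Tensor Cores.
    scores = torch.matmul(q_i8.to(torch.int32), k_i8.to(torch.int32).transpose(-1, -2))
    # Dequantize with the outer product of per-token scales, then apply softmax scaling.
    scores = scores.float() * q_scale * k_scale.transpose(-1, -2) / head_dim ** 0.5
    probs = torch.softmax(scores, dim=-1)
    # The PV product stays in the original (higher) precision to preserve accuracy.
    return torch.matmul(probs.to(v.dtype), v)

if __name__ == "__main__":
    q, k, v = (torch.randn(1, 8, 128, 64) for _ in range(3))
    out = int8_attention(q, k, v)
    ref = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    print("max abs error vs full-precision attention:", (out - ref).abs().max().item())
```

The usage stub compares the quantized output against full-precision scaled-dot-product attention to show that per-token 8-bit quantization of Q and K introduces only a small numerical error on random inputs; the paper's end-to-end accuracy claims refer to its own kernels and quantization scheme, not this sketch.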
