Poster
ThinK: Thinner Key Cache by Query-Driven Pruning
Yuhui Xu · Zhanming Jie · Hanze Dong · Lei Wang · Xudong Lu · Aojun Zhou · Amrita Saha · Caiming Xiong · Doyen Sahoo
Hall 3 + Hall 2B #616
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT
Abstract:
Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications. However, their increased computational and memory demands present significant challenges, especially when handling long sequences. This paper focuses on the long-context scenario, addressing the inefficiencies of KV cache memory consumption during inference. Unlike existing approaches that optimize memory based on sequence length, we identify substantial redundancy in the channel dimension of the KV cache, as indicated by an uneven magnitude distribution and a low-rank structure in the attention weights. In response, we propose ThinK, a novel query-dependent KV cache pruning method designed to minimize attention weight loss while selectively pruning the least significant channels. Our approach not only maintains or enhances model accuracy but also reduces KV cache memory costs by over 20% compared with vanilla KV cache eviction and quantization methods. For instance, ThinK integrated with KIVI can achieve peak memory reduction while maintaining nearly the same quality, enabling a batch size increase from 4 (with KIVI alone) to 5 on a single GPU. Extensive evaluations on the LLaMA and Mistral models across various long-sequence datasets verify the effectiveness of ThinK. Our code has been made available at https://github.com/SalesforceAIResearch/ThinK.
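The core idea can be sketched compactly. Below is a minimal, hypothetical PyTorch illustration of query-driven channel pruning of a key cache: each channel of the cached keys is scored with a magnitude-based proxy for its contribution to the query-key dot product, and the lowest-scoring channels are dropped. The scoring rule, the function name `prune_key_channels`, and the `keep_ratio` parameter are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
# Sketch (not the official ThinK code): query-driven pruning of key-cache channels.
# Assumption: channel importance is approximated by the product of the summed
# query magnitude and summed key magnitude in that channel.
import torch

def prune_key_channels(queries: torch.Tensor,
                       keys: torch.Tensor,
                       keep_ratio: float = 0.6):
    """
    queries: (batch, heads, q_len, head_dim)   recent query states
    keys:    (batch, heads, kv_len, head_dim)  cached key states
    Returns the pruned keys and the kept channel indices per head.
    """
    # Cheap per-channel importance proxy for the Q·K^T contribution.
    q_mag = queries.abs().sum(dim=2)           # (batch, heads, head_dim)
    k_mag = keys.abs().sum(dim=2)              # (batch, heads, head_dim)
    channel_score = q_mag * k_mag              # (batch, heads, head_dim)

    head_dim = keys.shape[-1]
    k_keep = max(1, int(keep_ratio * head_dim))
    keep_idx = channel_score.topk(k_keep, dim=-1).indices   # (batch, heads, k_keep)

    # Keep only the selected channels of the cached keys.
    gather_idx = keep_idx.unsqueeze(2).expand(-1, -1, keys.shape[2], -1)
    pruned_keys = keys.gather(dim=-1, index=gather_idx)     # (batch, heads, kv_len, k_keep)
    return pruned_keys, keep_idx


# Toy usage: prune the key cache, then restrict the query to the same channels
# before computing attention scores. The softmax scaling detail is glossed over here.
B, H, Tq, Tk, D = 1, 8, 4, 1024, 128
q = torch.randn(B, H, Tq, D)
k = torch.randn(B, H, Tk, D)
k_pruned, idx = prune_key_channels(q, k, keep_ratio=0.6)
q_pruned = q.gather(-1, idx.unsqueeze(2).expand(-1, -1, Tq, -1))
scores = (q_pruned @ k_pruned.transpose(-1, -2)) / D ** 0.5  # (B, H, Tq, Tk)
```

Because only the key cache shrinks along the channel dimension, the value cache and the attention output shapes are unchanged; the query simply needs to be indexed with the same kept channels at attention time.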