

Poster in Workshop: SCOPE: SCALABLE OPTIMIZATION FOR EFFICIENT AND ADAPTIVE FOUNDATION MODELS

Acceleration Multiple Heads Decoding for LLM via Dynamic Tree Attention

Zhendong Zhang

Keywords: [ tree attention ] [ MEDUSA ] [ dynamic construction ] [ multiple heads decoding ]


Abstract:

Multiple heads decoding accelerates the inference of Large Language Models (LLMs) by predicting the next several tokens simultaneously. It generates and verifies multiple candidate sequences in parallel via tree attention with a fixed structure. In this paper, we replace the fixed tree attention with dynamic tree attention for multiple heads decoding, specifically in the context of MEDUSA. We propose a simple, low-complexity strategy to generate candidates and construct the dynamic tree structure. Preliminary experiments show that the proposed method improves the decoding efficiency of multiple heads decoding for LLMs while maintaining generation quality. This result demonstrates the potential for improving candidate generation in multiple heads decoding.
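
To illustrate the general idea, the sketch below shows one plausible way to build a candidate tree dynamically from per-head token probabilities and to derive the tree-attention mask that lets all candidates be verified in a single forward pass. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function names (`build_dynamic_tree`, `tree_attention_mask`) and the greedy joint-probability selection rule are hypothetical.

```python
# Minimal sketch: dynamically construct a candidate tree for multi-head
# (MEDUSA-style) decoding from per-head token probabilities, then derive the
# tree-attention mask. The greedy joint-probability rule is an assumption,
# not the method described in the paper.
import heapq
import numpy as np

def build_dynamic_tree(head_probs, budget):
    """head_probs: list of 1-D arrays, head_probs[d][v] = P(token v at depth d).
    Returns a list of tree nodes (parent_index, depth, token), root excluded,
    chosen greedily by joint probability under a total node budget."""
    nodes = []  # each entry: (parent_index, depth, token); parent -1 = root
    # Frontier entries: (-joint_prob, parent_node_index, depth, token)
    frontier = [(-p, -1, 0, v) for v, p in enumerate(head_probs[0])]
    heapq.heapify(frontier)
    while frontier and len(nodes) < budget:
        neg_joint, parent, depth, token = heapq.heappop(frontier)
        nodes.append((parent, depth, token))
        node_idx = len(nodes) - 1
        if depth + 1 < len(head_probs):
            joint = -neg_joint
            for v, p in enumerate(head_probs[depth + 1]):
                heapq.heappush(frontier, (-(joint * p), node_idx, depth + 1, v))
    return nodes

def tree_attention_mask(nodes):
    """Each candidate token may attend only to itself and its ancestors."""
    n = len(nodes)
    mask = np.zeros((n, n), dtype=bool)
    for i, (parent, _, _) in enumerate(nodes):
        mask[i, i] = True
        while parent != -1:
            mask[i, parent] = True
            parent = nodes[parent][0]
    return mask

if __name__ == "__main__":
    # Toy probabilities from 3 hypothetical decoding heads over a 4-token vocab.
    rng = np.random.default_rng(0)
    probs = [rng.dirichlet(np.ones(4)) for _ in range(3)]
    tree = build_dynamic_tree(probs, budget=8)
    print("tree nodes (parent, depth, token):", tree)
    print("attention mask:\n", tree_attention_mask(tree).astype(int))
```

In contrast to a fixed tree, the shape here adapts to the current probabilities: high-confidence prefixes receive more descendants, while unlikely branches are never expanded, which is the kind of adaptivity the abstract attributes to dynamic tree attention.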
