Poster in Workshop: Workshop on Large Language Models for Agents

Recursive Speculative Decoding: Accelerating LLM Inference via Sampling Without Replacement

Wonseok Jeon · Mukul Gagrani · Raghavv Goel · Junyoung Park · Mingu Lee · Christopher Lott


Abstract: Speculative decoding is an inference-acceleration method for large language models (LLMs) in which a small draft language model generates a draft-token sequence that is then verified by the target LLM in parallel. Recent works have advanced this method by building a draft-token tree, achieving superior performance over single-sequence speculative decoding. However, those works generate tokens independently at each level of the tree, without fully exploiting the diversity the tree could offer. Moreover, their empirical superiority has been shown for a fixed draft sequence length, which implicitly grants the tree-based methods more computational resources at the target LLM. No existing work has conducted empirical studies with a fixed target computational budget, despite its importance for resource-bounded devices. We present Recursive Speculative Decoding (RSD), a novel tree-based method that samples draft tokens **without** replacement and maximizes the diversity of the tree. During drafting, RSD builds the tree either with the **Gumbel-Top-$k$ trick**, which draws tokens without replacement in parallel, or with **Stochastic Beam Search**, which samples **sequences** without replacement while early-truncating unlikely draft sequences, reducing the computational cost at the target LLM. We empirically evaluate RSD with Llama 2 and OPT models, showing that RSD outperforms the baseline methods consistently for a fixed draft sequence length and in most cases for a fixed computational budget at the target LLM.
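
The Gumbel-Top-$k$ trick mentioned in the abstract has a compact standard form. The sketch below (PyTorch, with a hypothetical `gumbel_top_k` helper, not the authors' implementation) illustrates the core idea: perturbing the draft model's logits with i.i.d. Gumbel noise and keeping the $k$ largest entries yields $k$ distinct tokens, i.e. a sample without replacement from the draft distribution, obtained in a single parallel step.

```python
# Minimal sketch of the Gumbel-Top-k trick, assuming a single draft-model
# logits vector; the real RSD method applies this per node of a draft tree.
import torch


def gumbel_top_k(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Sample k distinct token ids from softmax(logits) without replacement.

    logits: unnormalized log-probabilities of shape (vocab_size,).
    Returns a tensor of k unique token ids.
    """
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits)))
    perturbed = logits + gumbel_noise
    # The k largest perturbed logits correspond to k tokens drawn
    # sequentially without replacement from the categorical distribution.
    return torch.topk(perturbed, k).indices


# Example: draw 4 distinct draft tokens from a toy vocabulary of size 10.
tokens = gumbel_top_k(torch.randn(10), k=4)
```

Because the noise is added element-wise and the top-$k$ is a single reduction, all $k$ candidates are obtained in parallel rather than by repeated sampling, which is what makes the trick attractive for building diverse draft trees.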
