

Poster

Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment

Gregor Bachmann · Sotiris Anagnostidis · Albert Pumarola · Markos Georgopoulos · Artsiom Sanakoyeu · Yuming Du · Edgar Schoenfeld · Ali Thabet · Jonas Kohler

Hall 3 + Hall 2B #469
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 6B
Sat 26 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract: The performance of large language models (LLMs) is closely linked to their underlying size, leading to ever-growing networks and hence slower inference. Speculative decoding has been proposed as a technique to accelerate autoregressive generation, leveraging a fast draft model to propose candidate tokens, which are then verified in parallel based on their likelihood under the target model. While this approach is guaranteed to reproduce the target output, it incurs a substantial penalty: many high-quality draft tokens are rejected, even when they represent objectively valid continuations. Indeed, we show that even powerful draft models such as GPT-4o, as well as human-written text, cannot achieve high acceptance rates under the standard verification scheme. This severely limits the speedup potential of current speculative decoding methods, as an early rejection becomes overwhelmingly likely when relying solely on the alignment of draft and target. We thus ask the following question: Can we adapt verification to recognize correct, but non-aligned replies? To this end, we draw inspiration from the LLM-as-a-judge framework, which demonstrated that LLMs are able to rate answers in a versatile way. We carefully design a dataset, coined TokenCourt, to elicit the same capability in the target model by training a compact module on top of the embeddings to produce "judgements" of the current continuation. We showcase our strategy on the Llama-3.1 family, where our 8B/405B-Judge achieves a speedup of 9× over Llama-405B, while maintaining its quality on a large range of benchmarks. These benefits remain present even in optimized inference frameworks, where our method reaches up to 141 tokens/s for 8B/70B-Judge and 129 tokens/s for 8B/405B on 2 and 8 H100s respectively.
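To make the verification step concrete, the sketch below illustrates the idea in PyTorch-style pseudocode: standard speculative sampling accepts a drafted token with probability min(1, q/p) under target (q) and draft (p) distributions, while a small judge head on top of the target model's hidden states can additionally accept tokens that are valid continuations despite being unlikely under the target. This is a minimal illustration under assumed interfaces; names such as `JudgeHead`, `verify_with_judge`, and the threshold logic are hypothetical and not the authors' released implementation.

```python
import torch
import torch.nn as nn


class JudgeHead(nn.Module):
    """Compact module over the target model's hidden states that scores whether
    a drafted token is an acceptable continuation (illustrative sketch only)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.ReLU(),
            nn.Linear(hidden_dim // 4, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (draft_len, hidden_dim) -> acceptance logit per token
        return self.scorer(hidden_states).squeeze(-1)


def verify_with_judge(draft_tokens, draft_probs, target_probs, target_hidden,
                      judge: JudgeHead, threshold: float = 0.5):
    """Return the accepted prefix of the drafted tokens.

    Standard speculative sampling accepts drafted token x_t with probability
    min(1, q(x_t) / p(x_t)), where q is the target and p the draft distribution.
    The judge head adds a second acceptance path: a token rejected by the
    likelihood test can still be accepted if it is judged a valid continuation.
    """
    q = target_probs.gather(-1, draft_tokens.unsqueeze(-1)).squeeze(-1)
    p = draft_probs.gather(-1, draft_tokens.unsqueeze(-1)).squeeze(-1)
    judge_scores = torch.sigmoid(judge(target_hidden))  # (draft_len,)

    n_accepted = 0
    for t in range(draft_tokens.shape[0]):
        likelihood_accept = torch.rand(()) < torch.clamp(q[t] / p[t], max=1.0)
        judge_accept = judge_scores[t] > threshold
        if likelihood_accept or judge_accept:
            n_accepted += 1
        else:
            break  # the first rejection ends the accepted prefix
    return draft_tokens[:n_accepted]
```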
