

Poster

Faster Cascades via Speculative Decoding

Harikrishna Narasimhan · Wittawat Jitkrittum · Ankit Singh Rawat · Seungyeon Kim · Neha Gupta · Aditya Krishna Menon · Sanjiv Kumar

Hall 3 + Hall 2B #298
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 6B
Sat 26 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

Cascades and speculative decoding are two common approaches to improving language models' inference efficiency. Both approaches interleave two models, but via fundamentally distinct mechanisms: cascades employ a deferral rule that invokes the larger model only for “hard” inputs, while speculative decoding uses speculative execution to primarily invoke the larger model in parallel scoring mode. These mechanisms offer different benefits: empirically, cascades offer compelling cost-quality trade-offs, often even outperforming the large model; speculative decoding offers impressive speed-ups, while guaranteeing quality-neutrality. In this paper, we leverage the best of both these approaches by designing new speculative cascading techniques that implement their deferral rule through speculative execution. We characterize the optimal deferral rule for our speculative cascades, and employ a plug-in approximation to the optimal rule. Experiments with Gemma and T5 models on a range of language benchmarks show that our approach yields better cost-quality trade-offs than cascading and speculative decoding baselines.
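The mechanism described above can be sketched in a few lines: a small draft model proposes a block of tokens autoregressively, the large model scores the entire block in a single parallel pass, and a token-level deferral rule decides whether to keep each draft token or defer to the large model. The sketch below is illustrative only, assuming toy stand-in models and a simple confidence-gap rule (`margin`) in place of the paper's optimal plug-in deferral rule; `speculative_cascade_step`, `draft_model`, and `target_model` are hypothetical names, not from the paper.

```python
import torch

torch.manual_seed(0)
VOCAB = 32  # toy vocabulary size

def toy_logits(tokens, salt):
    """Deterministic stand-in for a language model: next-token logits
    at every position of `tokens` (real models would be Transformers)."""
    g = torch.Generator().manual_seed((int(tokens.sum()) + salt) % 9973)
    return torch.randn(len(tokens), VOCAB, generator=g)

draft_model = lambda toks: toy_logits(toks, salt=1)   # small, cheap model
target_model = lambda toks: toy_logits(toks, salt=2)  # large, expensive model

def speculative_cascade_step(prefix, gamma=4, margin=0.1):
    """One block of speculative-cascade decoding (illustrative sketch).

    The draft model proposes `gamma` tokens autoregressively; the target
    model scores the whole block in one parallel pass; a per-token
    deferral rule keeps each draft token unless the target model
    disagrees strongly, in which case we defer to the target token
    and the caller restarts drafting from there.
    """
    # 1) Autoregressive drafting with the small model.
    ctx, draft_probs = prefix.clone(), []
    for _ in range(gamma):
        p = torch.softmax(draft_model(ctx)[-1], dim=-1)
        draft_probs.append(p)
        ctx = torch.cat([ctx, p.argmax().unsqueeze(0)])

    # 2) One parallel scoring pass with the large model over the draft block.
    q = torch.softmax(target_model(ctx[:-1]), dim=-1)  # q[i] predicts ctx[i+1]

    # 3) Token-level deferral rule (a simple confidence-gap proxy for the
    #    paper's plug-in rule): keep the draft token while the target
    #    model's probability for it is not much below the draft's.
    out = prefix.clone()
    for i in range(gamma):
        pos = len(prefix) - 1 + i        # scoring position for draft token i
        tok = ctx[len(prefix) + i]
        if q[pos][tok] >= draft_probs[i][tok] - margin:
            out = torch.cat([out, tok.unsqueeze(0)])  # keep draft token
        else:
            defer = q[pos].argmax().unsqueeze(0)      # defer to target model
            return torch.cat([out, defer])
    return out

prefix = torch.tensor([1, 2, 3])
print(speculative_cascade_step(prefix))
```

Note the contrast with standard speculative decoding, which rejects a draft token whenever it is unlikely under the target model in order to guarantee quality-neutrality: a cascade-style deferral rule may deliberately keep draft tokens the target model disagrees with, which is what allows cascades to trade quality against cost rather than exactly reproduce the large model's distribution.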
