Poster
Faster Cascades via Speculative Decoding
Harikrishna Narasimhan · Wittawat Jitkrittum · Ankit Singh Rawat · Seungyeon Kim · Neha Gupta · Aditya Krishna Menon · Sanjiv Kumar
Hall 3 + Hall 2B #298
Sat 26 Apr 12:30 a.m. PDT — 2 a.m. PDT
Cascades and speculative decoding are two common approaches to improving language models' inference efficiency. Both approaches interleave two models, but via fundamentally distinct mechanisms: cascades employ a deferral rule that invokes the larger model only for “hard” inputs, while speculative decoding uses speculative execution to primarily invoke the larger model in parallel scoring mode. These mechanisms offer different benefits: empirically, cascades offer compelling cost-quality trade-offs, often even outperforming the large model; speculative decoding offers impressive speed-ups, while guaranteeing quality-neutrality. In this paper, we leverage the best of both these approaches by designing new speculative cascading techniques that implement their deferral rule through speculative execution. We characterize the optimal deferral rule for our speculative cascades, and employ a plug-in approximation to the optimal rule. Experiments with Gemma and T5 models on a range of language benchmarks show that our approach yields better cost-quality trade-offs than cascading and speculative decoding baselines.
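To make the mechanism concrete, below is a minimal Python sketch of one step of a speculative cascade: the small model drafts a block of tokens, the large model scores the whole draft in a single parallel pass, and a per-token deferral rule decides whether to keep each draft token or defer to the large model's prediction. The names `draft_model` and `target_model`, the block length `gamma`, and the simple confidence threshold are illustrative assumptions for this sketch, not the paper's optimal or plug-in deferral rule.

```python
import torch

def speculative_cascade_step(draft_model, target_model, prefix,
                             gamma=4, threshold=0.3):
    """One block of speculative-cascade decoding (illustrative sketch).

    Assumes both models map a [1, len] token tensor to
    [1, len, vocab] logits; greedy drafting and a fixed confidence
    threshold stand in for the paper's plug-in deferral rule.
    """
    # 1. Draft `gamma` tokens autoregressively with the small model.
    draft_tokens = []
    ctx = prefix
    for _ in range(gamma):
        logits = draft_model(ctx)[:, -1, :]        # small-model next-token logits
        tok = logits.argmax(dim=-1, keepdim=True)  # greedy draft for simplicity
        draft_tokens.append(tok)
        ctx = torch.cat([ctx, tok], dim=-1)

    # 2. Score prefix + draft with the large model in one parallel pass.
    target_logits = target_model(ctx)              # shape: [1, len(ctx), vocab]

    # 3. Apply the deferral rule token by token over the drafted block.
    accepted = []
    for i, tok in enumerate(draft_tokens):
        pos = prefix.shape[1] + i - 1              # logits that predict this token
        q = torch.softmax(target_logits[:, pos, :], dim=-1)
        if q[0, tok.item()] >= threshold:
            # Large model assigns the draft token enough mass: keep it.
            accepted.append(tok)
        else:
            # Defer: emit the large model's token and end the block.
            accepted.append(q.argmax(dim=-1, keepdim=True))
            break

    return torch.cat([prefix] + accepted, dim=-1)
```

Because the large model only runs once per block (rather than once per token), accepted draft tokens come at near draft-model cost, while the deferral rule preserves the cascade-style ability to hand hard tokens to the large model.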