Speculative Speculative Decoding
Abstract
Autoregressive decoding is bottlenecked by its sequential nature. Speculative decoding has become a standard way to accelerate inference by using a fast draft model to predict upcoming tokens from a slower target model, and then verifying them in parallel with a single target model forward pass. However, speculative decoding itself retains a sequential dependence between speculation and verification. We introduce speculative speculative decoding (SSD) to parallelize these operations. While a verification is ongoing, the draft model predicts likely verification outcomes and preemptively prepares a speculation for each. If the actual verification outcome falls in the predicted set, a speculation can be returned immediately, thereby eliminating all speculation overhead. We identify three key challenges presented by SSD and, guided by theoretical analysis, put forth principled methods to solve each. The result is Saguaro, an optimized SSD algorithm that is up to twice as fast as optimized speculative decoding baselines and up to 5× faster than autoregressive decoding with open-source inference engines. Saguaro can be combined with existing methods like EAGLE and token tree speculation for further gains, and permits scaling draft compute to better predict verification outcomes, introducing new tradeoffs between compute and latency.
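
To make the mechanism concrete, the following is a minimal Python sketch of one SSD iteration under illustrative assumptions: the `draft.speculate`, `draft.predict_outcomes`, and `target.verify` interfaces are hypothetical stand-ins, not the paper's API, and verification outcomes are modeled simply as tuples of accepted token ids. In a real system the draft and target would run on separate devices or CUDA streams rather than Python threads.

```python
import concurrent.futures

def ssd_step(target, draft, context, k_outcomes=4, draft_len=8):
    """One illustrative SSD iteration (all interfaces are hypothetical).

    context: list of token ids seen so far.
    Returns (new context, speculation to verify in the next step).
    """
    # 1. Draft a speculation for the current context, as in ordinary
    #    speculative decoding.
    speculation = draft.speculate(context, n_tokens=draft_len)

    with concurrent.futures.ThreadPoolExecutor() as pool:
        # 2. Launch target-model verification asynchronously so that
        #    drafting can overlap with it.
        verify_future = pool.submit(target.verify, context, speculation)

        # 3. While verification is ongoing, predict its k most likely
        #    outcomes (accepted prefixes, as tuples of token ids) and
        #    pre-draft a speculation for each predicted outcome.
        predicted = draft.predict_outcomes(context, speculation, k=k_outcomes)
        pre_drafts = {
            outcome: draft.speculate(context + list(outcome), n_tokens=draft_len)
            for outcome in predicted
        }

        # 4. Collect the actual verification outcome.
        accepted = tuple(verify_future.result())

    next_context = context + list(accepted)
    if accepted in pre_drafts:
        # Hit: the next speculation was prepared during verification,
        # so no drafting latency appears on the critical path.
        return next_context, pre_drafts[accepted]
    # Miss: fall back to drafting sequentially, as in plain
    # speculative decoding.
    return next_context, draft.speculate(next_context, n_tokens=draft_len)
```

Under these assumptions, increasing `k_outcomes` spends more draft compute to raise the chance of a hit, which is the compute-versus-latency tradeoff the abstract refers to.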