Oral in Workshop: How Far Are We From AGI

The Pitfalls of Next-Token Prediction

Gregor Bachmann · Vaishnavh Nagarajan

Keywords: [ next-token prediction ] [ autoregressive ] [ language models ]


Abstract:

Can a mere next-token predictor faithfully model human intelligence? Our work is aimed at crystallizing this intuitive concern, which is currently fragmented in the literature. As a starting point, we advocate isolating the two phases of next-token prediction that are often conflated: autoregression during inference vs. teacher-forcing during training. We argue that the previously identified problem of "exponential error accumulation" is a symptom of autoregressive inference. We then identify a more concerning problem: teacher-forcing can let the model fit the training data by cheating, causing total in-distribution failure during inference. We design a minimal planning task where empirically both the Transformer and the Mamba architecture fail in this manner; remarkably, this happens despite the task being easy to learn. Our work consolidates these and other essential arguments surrounding next-token prediction. We hope our effort can ground the next-token prediction debate and inspire further explorations beyond this paradigm.
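The distinction the abstract draws between teacher-forcing and autoregressive inference can be made concrete with a small sketch. The toy model, the sequence, and the helper names below are illustrative assumptions, not from the paper; they only show how a single mistake stays isolated under teacher-forced prefixes but compounds when the model conditions on its own outputs.

```python
# A minimal sketch (not from the paper) of the two phases of next-token
# prediction. `toy_model` is a hypothetical stand-in for any autoregressive
# model mapping a prefix of tokens to a next-token prediction.

def toy_model(prefix):
    # Hypothetical model: predicts last + 1, but makes one systematic
    # mistake whenever the last token is 3.
    last = prefix[-1]
    return 0 if last == 3 else last + 1

def teacher_forced_errors(model, target):
    """Training-style evaluation: every prefix is the ground-truth prefix,
    so an error at step t never contaminates step t + 1."""
    errors = 0
    for t in range(1, len(target)):
        pred = model(target[:t])          # conditioned on true tokens
        errors += int(pred != target[t])
    return errors

def autoregressive_errors(model, target):
    """Inference-style evaluation: each prefix contains the model's own
    previous predictions, so early mistakes can compound."""
    generated = [target[0]]               # seed with the first true token
    errors = 0
    for t in range(1, len(target)):
        pred = model(generated)           # conditioned on generated tokens
        errors += int(pred != target[t])
        generated.append(pred)            # feed the prediction back in
    return errors

if __name__ == "__main__":
    sequence = [1, 2, 3, 4, 5, 6]
    print("teacher-forced errors:", teacher_forced_errors(toy_model, sequence))  # 1
    print("autoregressive errors:", autoregressive_errors(toy_model, sequence))  # 3
```

On this toy sequence the same single flaw produces one error under teacher-forcing but three under autoregressive generation, which is the "error accumulation" symptom the abstract attributes to the inference phase, as distinct from the cheating failure mode it attributes to teacher-forced training.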