Oral in Workshop: 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models
Uncovering Mesa-Optimization Algorithms in Transformers
Johannes von Oswald · Eyvind Niklasson · Maximilian Schlegel · Alexander Meulemans · Seijin Kobayashi · Nicolas Zucchet · Nino Scherrer · Nolan Miller · Mark Sandler · Blaise Aguera y Arcas · Max Vladymyrov · Razvan Pascanu · Joao Sacramento
Transformers have become the dominant model in deep learning, but the reasons for their superior performance are poorly understood. Here, we hypothesize that the strong performance of Transformers stems from an architectural bias towards mesa-optimization: a learned process that runs within the model's forward pass and consists of two steps, (i) the construction of an internal learning objective, and (ii) its solution through optimization. To test this hypothesis, we reverse-engineer a series of autoregressive Transformers trained on simple sequence modeling tasks, uncovering the underlying gradient-based mesa-optimization algorithms that drive their predictions. Moreover, we show that the learned forward-pass optimization algorithm can be repurposed immediately to solve supervised few-shot tasks, suggesting that mesa-optimization might underlie the in-context learning capabilities of large language models. Building on these insights, we propose a novel self-attention layer, the mesa-layer, that explicitly and efficiently solves optimization problems specified in context.
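
The two-step picture can be made concrete with a small sketch. The Python example below assumes, purely for illustration and not as the paper's exact construction, that the internal objective is in-context least-squares regression mapping each element s_t of a linear autoregressive sequence to its successor s_{t+1}; it contrasts a gradient-based mesa-optimizer with a closed-form least-squares solve in the spirit of the mesa-layer. All names, dimensions, and hyperparameters are illustrative.

import numpy as np

# Minimal sketch of the two-step mesa-optimization picture, assuming the
# internal objective is in-context least-squares regression on consecutive
# elements of a linear autoregressive sequence (illustrative, not the
# authors' exact construction).

rng = np.random.default_rng(0)
d, T = 4, 16                                     # state dimension, context length
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
W_true = 0.95 * Q                                # stable ground-truth dynamics

s = [rng.normal(size=d)]
for _ in range(T):                               # generate the in-context sequence
    s.append(W_true @ s[-1] + 0.01 * rng.normal(size=d))
S_in, S_out = np.stack(s[:-1]), np.stack(s[1:])  # (s_t, s_{t+1}) pairs from context

# Step (i): internal objective constructed from the context,
#   L(W) = sum_t || W s_t - s_{t+1} ||^2.
# Step (ii): solve it inside the forward pass.

# Gradient-based mesa-optimizer: a few gradient steps on L(W), analogous to
# what stacked (linear) self-attention layers can implement.
W = np.zeros((d, d))
lr = 1.0 / (2.0 * np.linalg.norm(S_in.T @ S_in, 2))  # step size below 1/Lipschitz
for _ in range(20):
    grad = 2.0 * (W @ S_in.T - S_out.T) @ S_in       # dL/dW
    W -= lr * grad

# Mesa-layer-style alternative: solve the (ridge-regularized) least-squares
# problem in closed form instead of taking gradient steps.
lam = 1e-3
W_exact = S_out.T @ S_in @ np.linalg.inv(S_in.T @ S_in + lam * np.eye(d))

query = s[-1]                                    # predict the next sequence element
print("gradient-descent prediction:", W @ query)
print("closed-form prediction:     ", W_exact @ query)
print("ground truth:               ", W_true @ query)

The same closed-form predictor can be applied unchanged to supervised few-shot pairs supplied in context, which is the sense in which a forward-pass optimizer can be repurposed for in-context learning.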