

Oral in Workshop: Self-Improving Foundation Models Without Human Supervision

Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges

Nayoung Lee · Ziyang Cai · Avi Schwarzschild · Kangwook Lee · Dimitris Papailiopoulos

Keywords: [ length generalization ] [ self-training ] [ self-improvement ]


Abstract:

Large language models often struggle with length generalization and with solving complex problem instances beyond their training distribution. We present a self-improvement approach in which models iteratively generate and learn from their own solutions, progressively tackling harder problems while retaining a standard transformer architecture. Across diverse tasks including arithmetic, string manipulation, and maze solving, self-improvement enables models to solve problems far beyond their initial training distribution; for instance, they generalize from 10-digit to 100-digit addition without apparent saturation. We observe that in some cases filtering for correct self-generated examples leads to exponential improvements in out-of-distribution performance across training rounds. Additionally, starting from pretrained models significantly accelerates self-improvement on several tasks. Our results demonstrate how controlled weak-to-strong curricula can systematically teach a model logical extrapolation without any changes to positional embeddings or the model architecture.
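The loop the abstract describes can be pictured with a short, self-contained sketch, shown here on n-digit addition. Everything below is an illustrative assumption rather than the authors' code: the `Model` class simulates an imperfect learner, and a ground-truth correctness check stands in for whatever filtering rule the paper uses to select correct self-generated examples.

```python
import random

class Model:
    """Hypothetical stand-in for a trained transformer (not the paper's model)."""

    def generate(self, a, b):
        # Simulate an imperfect learner: mostly correct, occasionally off by one.
        return a + b if random.random() < 0.8 else a + b + 1

    def finetune(self, examples):
        pass  # Placeholder for a gradient-based fine-tuning step.

def sample_problems(num_digits, n):
    """Sample n random addition problems with num_digits-digit operands."""
    lo, hi = 10 ** (num_digits - 1), 10 ** num_digits - 1
    return [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(n)]

def self_improve(model, start_digits, rounds, batch_size=256):
    num_digits = start_digits
    for _ in range(rounds):
        # 1. Sample problems slightly harder than the current training distribution.
        problems = sample_problems(num_digits, batch_size)
        # 2. The model labels its own data by generating candidate solutions.
        candidates = [(a, b, model.generate(a, b)) for a, b in problems]
        # 3. Filter before training; a ground-truth check is used here for
        #    simplicity, standing in for the paper's filtering of correct
        #    self-generated examples.
        kept = [(a, b, y) for a, b, y in candidates if y == a + b]
        # 4. Fine-tune on the filtered self-generated examples only.
        model.finetune(kept)
        # 5. Advance the curriculum (e.g., one more digit per round).
        num_digits += 1
    return model

model = self_improve(Model(), start_digits=10, rounds=5)
```

The key design choice this sketch illustrates is that each round's training data comes entirely from the model's own filtered outputs on slightly harder instances, so no human labels, positional-embedding changes, or architectural modifications enter the loop.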
