

Poster

Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model

Zhiwei Xu · Zhiyu Ni · Yixin Wang · Wei Hu

Hall 3 + Hall 2B #335
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

''Grokking'' is a phenomenon where a neural network first memorizes training data and generalizes poorly, but then suddenly transitions to near-perfect generalization after prolonged training. While intriguing, this delayed generalization phenomenon compromises predictability and efficiency. Ideally, models should generalize directly without delay. To this end, this paper proposes GrokTransfer, a simple and principled method for accelerating grokking in training neural networks, based on the key observation that data embedding plays a crucial role in determining whether generalization is delayed. GrokTransfer first trains a smaller, weaker model to reach a nontrivial (but far from optimal) test performance. Then, the learned input embedding from this weaker model is extracted and used to initialize the embedding in the target, stronger model. We rigorously prove that, on a synthetic XOR task where delayed generalization always occurs in normal training, GrokTransfer enables the target model to generalize directly without delay. Moreover, we demonstrate that, across empirical studies of different tasks, GrokTransfer effectively reshapes the training dynamics and eliminates delayed generalization, for both fully-connected neural networks and Transformers.
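As a rough illustration of the transfer step described above, here is a minimal PyTorch sketch. The toy vocabulary, model sizes, architectures, and names (`WeakModel`, `TargetModel`, `VOCAB`, `EMB_DIM`) are assumptions made for illustration and are not taken from the paper's experimental setup.

```python
# Minimal sketch of the GrokTransfer recipe: train a small weak model, then
# copy its learned input embedding into a larger target model before training.
# All sizes and the toy two-token task are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, EMB_DIM = 97, 32  # e.g., token ids for a small algorithmic task (assumed)

class WeakModel(nn.Module):
    """Small, weaker model trained first to nontrivial (not optimal) test accuracy."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB_DIM)
        self.head = nn.Linear(2 * EMB_DIM, VOCAB)  # two-token input, concatenated

    def forward(self, x):                 # x: (batch, 2) token ids
        return self.head(self.embed(x).flatten(1))

class TargetModel(nn.Module):
    """Larger, stronger model whose input embedding is initialized from the weak one."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB_DIM)
        self.mlp = nn.Sequential(
            nn.Linear(2 * EMB_DIM, 512), nn.ReLU(),
            nn.Linear(512, VOCAB),
        )

    def forward(self, x):
        return self.mlp(self.embed(x).flatten(1))

# Step 1: train the weak model on the task (training loop omitted here).
weak = WeakModel()
# ... train weak until it reaches nontrivial test performance ...

# Step 2: transfer the learned input embedding into the target model.
target = TargetModel()
with torch.no_grad():
    target.embed.weight.copy_(weak.embed.weight)

# Step 3: train the target model as usual; per the paper, starting from this
# embedding reshapes the training dynamics so generalization is not delayed.
```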
