

In-Person Poster presentation / top 25% paper

Omnigrok: Grokking Beyond Algorithmic Data

Ziming Liu · Eric Michaud · Max Tegmark

MH1-2-3-4 #85

Keywords: [ Deep Learning and representational learning ] [ initialization ] [ representation learning ] [ neural dynamics ] [ loss landscape ] [ grokking ]


Abstract:

Grokking, the unusual phenomenon on algorithmic datasets where generalization happens long after the training data have been overfit, has remained difficult to explain. We aim to understand grokking by analyzing the loss landscapes of neural networks, identifying the mismatch between training and test losses as the cause of grokking. We refer to this as the "LU mechanism" because training and test losses (as functions of model weight norm) typically resemble an "L" and a "U", respectively. This simple mechanism nicely explains many aspects of grokking: data size dependence, weight decay dependence, the emergence of representations, etc. Guided by this intuitive picture, we are able to induce grokking on tasks involving images, language and molecules, although the grokking signals are sometimes less dramatic. We attribute the dramatic nature of grokking on algorithmic datasets to representation learning.
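A minimal sketch (not the authors' code) of how one might probe the "LU mechanism" described above: take a network, rescale its weights to vary the overall weight norm, and record training versus test loss at each scale. The model, data loaders, and scale grid below are hypothetical placeholders; the expectation, per the abstract, is an L-shaped training curve and a U-shaped test curve.

```python
import copy
import torch
import torch.nn.functional as F

def loss_at_scale(model, loader, scale, device="cpu"):
    """Average cross-entropy loss of a weight-rescaled copy of `model`."""
    scaled = copy.deepcopy(model).to(device)
    with torch.no_grad():
        # Uniformly rescale all parameters, so the overall weight norm
        # scales by `scale` (a simplification of the paper's setup).
        for p in scaled.parameters():
            p.mul_(scale)
    scaled.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total += F.cross_entropy(scaled(x), y, reduction="sum").item()
            n += y.numel()
    return total / n

# Hypothetical usage: sweep weight-norm scales and compare the two curves.
# scales = torch.logspace(-1, 1, 15)
# train_curve = [loss_at_scale(model, train_loader, s.item()) for s in scales]
# test_curve  = [loss_at_scale(model, test_loader,  s.item()) for s in scales]
```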
