

Understanding In-Context Learning from Repetitions

Jianhao (Elliott) Yan · Jin Xu · Chiyu Song · Chenming Wu · Yafu Li · Yue Zhang

Halle B #267
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT


This paper explores the elusive mechanism underpinning in-context learning in Large Language Models (LLMs). Our work provides a novel perspective by examining in-context learning through the lens of surface repetitions. We quantitatively investigate the role of surface features in text generation and empirically establish the existence of token co-occurrence reinforcement, a principle by which the association between two tokens strengthens with their contextual co-occurrences. We further find similar reinforcement patterns in the pretraining corpus, suggesting that this behavior arises from LLMs' drive to maximize likelihood. By investigating the dual impacts of these features, our research illuminates the internal workings of in-context learning and explains the reasons for its failures. This paper provides an essential contribution to the understanding of in-context learning and its potential limitations, offering a fresh perspective on this exciting capability.
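As a rough intuition for the reinforcement principle described above (a minimal toy sketch, not the paper's actual method or model), one can imagine a predictor whose probability of emitting token `b` right after token `a` grows with how often the bigram `(a, b)` has already appeared in the preceding context. The `alpha` parameter and the scoring rule below are illustrative assumptions:

```python
from collections import Counter

def cooccurrence_boosted_probs(context, prev_token, vocab, alpha=0.5):
    """Toy sketch of token co-occurrence reinforcement (illustrative only):
    each prior occurrence of the bigram (prev_token, w) in the context
    additively boosts the score of candidate token w."""
    # Count all adjacent token pairs seen so far in the context.
    bigrams = Counter(zip(context, context[1:]))
    # Base score of 1 for every candidate, boosted by past co-occurrences.
    scores = {w: 1.0 + alpha * bigrams[(prev_token, w)] for w in vocab}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

# The pair ("A", "B") repeats in the context, so "B" is reinforced after "A".
probs = cooccurrence_boosted_probs(["A", "B", "A", "B", "A"], "A", ["B", "C"])
```

Under this sketch, the more often a demonstration pattern repeats in the prompt, the more the continuation it establishes is favored, which mirrors both the benefit (few-shot pattern following) and the failure mode (spurious surface repetition) discussed in the abstract.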
