

Spotlight

GNN-LM: Language Modeling based on Global Contexts via GNN

Yuxian Meng · Shi Zong · Xiaoya Li · Xiaofei Sun · Tianwei Zhang · Fei Wu · Jiwei Li

Abstract:

Inspired by the notion that "to copy is easier than to memorize", in this work we introduce GNN-LM, which extends a vanilla neural language model (LM) by allowing it to reference similar contexts in the entire training corpus. We build a directed heterogeneous graph between an input context and its semantically related neighbors selected from the training corpus, where nodes are tokens in the input context and the retrieved neighbor contexts, and edges represent connections between nodes. Graph neural networks (GNNs) are constructed upon the graph to aggregate information from similar contexts to decode the token. This learning paradigm provides direct access to the reference contexts and helps improve a model's generalization ability. We conduct comprehensive experiments to validate the effectiveness of GNN-LM: it achieves a new state-of-the-art perplexity of 14.8 on WikiText-103 (a 3.9 point improvement over the vanilla LM counterpart), and shows substantial improvement on the One Billion Word and Enwik8 datasets against strong baselines. In-depth ablation studies are performed to understand the mechanics of GNN-LM. The code can be found at https://github.com/ShannonAI/GNN-LM.
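To make the described architecture concrete, below is a minimal, hedged sketch of the core idea: LM hidden states for the input context attend over hidden states of retrieved neighbor tokens through one graph-attention-style aggregation step before the output softmax. All module names, dimensions, and the single-layer design are illustrative simplifications, not the authors' implementation; in the paper, neighbors would come from kNN retrieval over a datastore of training-corpus hidden states.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeighborAggregator(nn.Module):
    """One message-passing step: each input-context token node attends over
    its retrieved neighbor token nodes (a bipartite slice of the heterogeneous
    graph) and mixes the aggregated message into its own representation."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, ctx: torch.Tensor, nbr: torch.Tensor) -> torch.Tensor:
        # ctx: (T, d)    hidden states of the input context, one node per token
        # nbr: (T, K, d) hidden states of K retrieved neighbor tokens per position
        q = self.q(ctx).unsqueeze(1)                    # (T, 1, d)
        k, v = self.k(nbr), self.v(nbr)                 # (T, K, d)
        att = (q * k).sum(-1) / ctx.size(-1) ** 0.5     # (T, K) edge scores
        msg = (F.softmax(att, dim=-1).unsqueeze(-1) * v).sum(1)  # (T, d)
        return ctx + self.out(msg)                      # residual update


# Hypothetical usage: refine the LM's hidden states with retrieved neighbors,
# then decode the next token from the refined representation as usual.
d, T, K, vocab = 64, 8, 4, 100
ctx_hidden = torch.randn(T, d)           # from a vanilla LM's last layer
neighbor_hidden = torch.randn(T, K, d)   # from retrieved training contexts
gnn = NeighborAggregator(d)
logits = nn.Linear(d, vocab)(gnn(ctx_hidden, neighbor_hidden))
print(logits.shape)  # torch.Size([8, 100])
```

The design choice this illustrates is the one the abstract names: instead of asking the LM to memorize rare continuations, the aggregation step lets it copy evidence directly from similar training contexts at decoding time.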
