

Poster

Memory Architectures in Recurrent Neural Network Language Models

Dani Yogatama · Yishu Miao · Gábor Melis · Wang Ling · Adhiguna Kuncoro · Chris Dyer · Phil Blunsom

East Meeting level; 1,2,3 #27

Abstract:

We compare and analyze sequential, random access, and stack memory architectures for recurrent neural network language models. Our experiments on the Penn Treebank and Wikitext-2 datasets show that stack-based memory architectures consistently achieve the best performance in terms of held-out perplexity. We also propose a generalization of existing continuous stack models (Joulin & Mikolov, 2015; Grefenstette et al., 2015) that allows a variable number of pop operations more naturally and further improves performance. We further evaluate these language models on their ability to capture non-local syntactic dependencies on a subject-verb agreement dataset (Linzen et al., 2016) and establish new state-of-the-art results using memory-augmented language models. Our results demonstrate the value of stack-structured memory for explaining the distribution of words in natural language, in line with linguistic theories that posit a context-free backbone for natural language.
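For intuition, below is a minimal sketch of the kind of continuous stack update used in the models the abstract builds on (after Joulin & Mikolov, 2015): each step softly interpolates between pushing a new vector, popping the top, and leaving the stack unchanged. The function and variable names are illustrative assumptions, not the authors' implementation, and the paper's proposed generalization further allows a variable number of pop operations per step rather than the single soft pop shown here.

```python
import numpy as np

def soft_stack_update(stack, push, pop, noop, value):
    """One continuous stack step (illustrative, after Joulin & Mikolov, 2015).

    stack: (depth, dim) array, row 0 is the top of the stack.
    push, pop, noop: non-negative action weights that sum to 1.
    value: (dim,) vector that would be pushed onto the stack.
    """
    depth, dim = stack.shape
    # Stack contents if a hard push happened: new value on top, rest shifted down.
    pushed = np.vstack([value[None, :], stack[:-1]])
    # Stack contents if a hard pop happened: everything shifted up, zeros at the bottom.
    popped = np.vstack([stack[1:], np.zeros((1, dim))])
    # Soft mixture of the three discrete actions.
    return push * pushed + pop * popped + noop * stack

# Hypothetical usage: action weights would come from the RNN controller via a softmax.
rng = np.random.default_rng(0)
stack = np.zeros((8, 16))
for _ in range(5):
    logits = rng.normal(size=3)
    push, pop, noop = np.exp(logits) / np.exp(logits).sum()
    stack = soft_stack_update(stack, push, pop, noop, rng.normal(size=16))
print(stack[0])  # soft "top" of the stack, read by the language model at the next step
```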
