

Poster in Workshop: Second Workshop on Representational Alignment (Re$^2$-Align)

Brain-like slot representation for sequence working memory in recurrent neural networks

Mingye Wang · Stefano Fusi · Kimberly Stachenfeld


Abstract:

Sequence working memory (SWM) is the ability to remember a sequence of items and recall them in order. Recent neural recordings from the prefrontal cortex during SWM revealed "slot"-like representations, in which each item in the sequence is stored in a separate low-dimensional subspace of population activity. However, it is unclear what circuit structure could give rise to such representations, and whether slot-like representations arise naturally from the constraints of the SWM task. Here, we trained recurrent neural networks (RNNs) on an SWM task using standard architectures and training procedures. Two types of networks emerged, relying on two types of representations: (1) a brain-like slot representation, which enables generalization to novel sequence lengths; and (2) a representation that stores items in time-varying subspaces and fails on novel sequence lengths. For (1), we delve into the network weights and identify a simple, interpretable circuit with modular components that implements slots. Our work bridges biological and artificial networks by demonstrating 1) that more brain-like representations in artificial networks, which can emerge from standard architectures and training, lead to better generalization performance, and 2) how interpretable circuit structures can be extracted from these brain-like artificial networks to serve as hypotheses for the brain.
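To make the slot idea concrete, the following is a minimal sketch (not the paper's model) of what a slot representation means computationally: the item presented at position k is written into the k-th orthogonal block of the population state, and recall simply reads the blocks out in order. All names (`codes`, `encode_sequence`, `decode_sequence`), the block-diagonal layout, and the dimensions are illustrative assumptions; the actual RNN in the work learns such structure rather than having it hand-wired.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 8    # vocabulary size (assumed for illustration)
d_item = 4     # dimensionality of each item's code (assumed)
max_slots = 5  # number of position slots (assumed)

# Hypothetical item codes: one random low-dimensional embedding per item.
codes = rng.standard_normal((n_items, d_item))

def encode_sequence(seq, n_slots=max_slots):
    """Slot encoding: the item at position k occupies the k-th
    disjoint block (subspace) of the population state vector."""
    state = np.zeros(n_slots * d_item)
    for k, item in enumerate(seq):
        state[k * d_item:(k + 1) * d_item] = codes[item]
    return state

def decode_sequence(state, length):
    """Ordered recall: project the state onto each slot's subspace
    and match the nearest item code."""
    out = []
    for k in range(length):
        block = state[k * d_item:(k + 1) * d_item]
        out.append(int(np.argmin(np.linalg.norm(codes - block, axis=1))))
    return out

seq = [3, 1, 4, 1, 5]
assert decode_sequence(encode_sequence(seq), len(seq)) == seq
```

Because each position has its own fixed subspace, shorter or longer sequences (up to the number of slots) reuse the same read/write machinery, which is the intuition behind why the slot solution generalizes to novel sequence lengths while a time-varying-subspace solution does not.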
