

Virtual presentation / poster accept

BrainBERT: Self-supervised representation learning for intracranial recordings

Christopher Wang · Vighnesh Subramaniam · Adam Yaari · Gabriel Kreiman · Boris Katz · Ignacio Cases · Andrei Barbu

Keywords: [ neuroscience ] [ language models ] [ transformer ] [ self-supervision ] [ decoding ] [ Neuroscience and Cognitive Science ]


Abstract:

We create a reusable Transformer, BrainBERT, for intracranial recordings, bringing modern representation learning approaches to neuroscience. Much like in NLP and speech recognition, this Transformer enables classifying complex concepts, i.e., decoding neural data, with higher accuracy and much less data by being pretrained in an unsupervised manner on a large corpus of unannotated neural recordings. Our approach generalizes to new subjects with electrodes in new positions and to unrelated tasks, showing that the representations robustly disentangle the neural signal. Just as in NLP, where one can study language by investigating what a language model learns, this approach opens the door to investigating the brain through what a model of the brain learns. As a first step along this path, we demonstrate a new analysis of the intrinsic dimensionality of the computations in different areas of the brain. To construct these representations, we combine a technique for producing super-resolution spectrograms of neural data with an approach designed for generating contextual representations of audio by masking. In the future, far more concepts will be decodable from neural recordings by using representation learning, potentially unlocking the brain like language models unlocked language.
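To make the pretraining idea concrete, below is a minimal sketch of masked spectrogram modeling, the general self-supervised objective the abstract describes: mask frames of an unlabeled spectrogram and train a Transformer encoder to reconstruct them, yielding contextual representations that can later be probed for decoding. All names, shapes, and hyperparameters (e.g., MaskedSpectrogramModel, n_freq_bins, mask_prob) are illustrative assumptions, not the paper's actual architecture, masking scheme, or spectrogram pipeline.

```python
# A minimal sketch of masked spectrogram pretraining (assumed setup, not the
# authors' exact method). Frames of an unlabeled spectrogram are masked and a
# Transformer encoder is trained to reconstruct them.
import torch
import torch.nn as nn


class MaskedSpectrogramModel(nn.Module):
    """Transformer encoder trained to reconstruct masked spectrogram frames."""

    def __init__(self, n_freq_bins: int = 40, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 6):
        super().__init__()
        self.input_proj = nn.Linear(n_freq_bins, d_model)     # embed each time frame
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned mask embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.output_proj = nn.Linear(d_model, n_freq_bins)    # reconstruct frames

    def forward(self, spec: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # spec: (batch, time, n_freq_bins); mask: (batch, time) boolean
        x = self.input_proj(spec)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        h = self.encoder(x)               # contextual representations per frame
        return self.output_proj(h)        # predicted spectrogram frames


def pretraining_loss(model, spec, mask_prob: float = 0.15):
    # Randomly mask time frames; score reconstruction only at masked positions.
    mask = torch.rand(spec.shape[:2]) < mask_prob
    pred = model(spec, mask)
    return ((pred - spec) ** 2)[mask].mean()


# Toy usage: one self-supervised step on unannotated "neural" spectrograms.
model = MaskedSpectrogramModel()
spec = torch.randn(8, 100, 40)            # (batch, time frames, frequency bins)
loss = pretraining_loss(model, spec)
loss.backward()
```

After pretraining, the frozen contextual representations (`h` above) would typically be fed to a small supervised classifier for decoding, which is how this style of model reduces the amount of labeled neural data needed.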
