

Poster

A rotation-equivariant convolutional neural network model of primary visual cortex

Alexander Ecker · Fabian H Sinz · Emmanouil Froudarakis · Paul Fahey · Santiago Cadena · Edgar Walker · Erick M Cobos · Jacob Reimer · Andreas Tolias · Matthias Bethge

Great Hall BC #34

Keywords: [ system identification ] [ v1 ] [ primary visual cortex ] [ rotation equivariance ] [ neuroscience ] [ equivariance ]


Abstract:

Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that convolutional neural networks (CNNs) can be trained to predict V1 activity more accurately, but it remains unclear which features V1 neurons extract beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple orientations. We fit this rotation-equivariant CNN to the responses of a population of 6,000 neurons in mouse primary visual cortex to natural images, recorded using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.
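The core architectural idea can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' code: each learned filter is applied as several rotated copies, so every feature is extracted at multiple orientations and the orientation-independent feature identity is shared across the rotated channels. The class name, filter count, and restriction to 90-degree rotations (via `torch.rot90`) are simplifying assumptions made here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RotEquivariantConv2d(nn.Module):
    """Convolution whose base filter bank is shared across a set of rotations.

    Illustrative sketch only: real rotation-equivariant models typically use
    finer orientation steps (with interpolated filters) than the 90-degree
    rotations assumed here.
    """

    def __init__(self, in_channels, num_features, kernel_size, num_rotations=4):
        super().__init__()
        self.num_rotations = num_rotations
        # One set of base filters; rotated copies are generated on the fly,
        # so all orientations of a feature share the same parameters.
        self.weight = nn.Parameter(
            torch.randn(num_features, in_channels, kernel_size, kernel_size) * 0.01
        )

    def forward(self, x):
        outputs = []
        for r in range(self.num_rotations):
            # Rotate every filter by r * 90 degrees in the spatial dimensions.
            w = torch.rot90(self.weight, k=r, dims=(-2, -1))
            outputs.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        # Concatenate rotated responses along the channel axis:
        # the output has num_features * num_rotations channels.
        return torch.cat(outputs, dim=1)


# Usage: 16 base features, each computed at 4 orientations -> 64 output channels.
layer = RotEquivariantConv2d(in_channels=1, num_features=16, kernel_size=13)
images = torch.randn(8, 1, 64, 64)  # batch of grayscale natural-image crops
print(layer(images).shape)          # torch.Size([8, 64, 64, 64])
```

Because the rotated channels share weights, the number of learned parameters stays that of the base filter bank while the readout still sees every feature at every orientation, which is what allows neurons with different preferred orientations to be grouped by a common underlying feature.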
