

Poster · Workshop: First Workshop on Representational Alignment (Re-Align)

Inferring DNN-brain alignment using representational similarity analyses can be problematic

Marin Dujmovic · Jeffrey Bowers · Federico Adolfi · Gaurav Malhotra

Keywords: Representational Similarity Analysis (RSA), Deep Neural Networks (DNN)


Abstract:

Representational Similarity Analysis (RSA) has been used to compare representations across individuals, species, and computational models. Here we focus on comparisons made between the activity of hidden units in Deep Neural Networks (DNNs) trained to classify objects and neural activations in visual cortex. In this context, DNNs that obtain high RSA scores are often described as good models of biological vision, a conclusion at odds with the failure of DNNs to account for the results of most vision experiments reported in psychology. How can these two sets of findings be reconciled? Here, we demonstrate that high RSA scores can easily be obtained between two systems that classify objects in qualitatively different ways when second-order confounds are present in image datasets. We argue that these confounds likely exist in the datasets used in current and past research. If RSA is going to be used as a tool to study DNN-human alignment, it will be necessary to experimentally manipulate images in ways that remove these confounds. We hope our simulations motivate researchers to reexamine the conclusions they draw from past research and focus more on RSA studies that manipulate images in theoretically motivated ways.
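For readers unfamiliar with the mechanics of RSA, the comparison the abstract refers to is typically computed by building a representational dissimilarity matrix (RDM) over a stimulus set for each system and then correlating the two RDMs. The sketch below is a minimal, generic illustration of that pipeline (using correlation distance for the RDM and Spearman correlation for the comparison, a common but not universal choice); it is not the authors' specific method, and the random data, including the `shared` component standing in for a confound common to both systems, is purely hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_stimuli, n_units) matrix.
    # RDM entries: correlation distance between each pair of stimulus vectors.
    return pdist(activations, metric="correlation")

def rsa_score(act_a, act_b):
    # RSA score: Spearman rank correlation between the two RDMs'
    # pairwise-dissimilarity entries.
    return spearmanr(rdm(act_a), rdm(act_b)).correlation

# Hypothetical toy data: two "systems" whose activations share a small
# common component (a stand-in for a dataset confound) plus independent noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 5))
system_a = np.hstack([shared, rng.normal(size=(20, 50))])
system_b = np.hstack([shared, rng.normal(size=(20, 50))])

print(rsa_score(system_a, system_b))
```

A system's RDM compared with itself yields a score of 1 by construction; between different systems the score lies in [-1, 1], and the paper's point is that a shared confound alone can push it high even when the systems classify in qualitatively different ways.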
