

Invited Talk in Workshop: Second Workshop on Representational Alignment (Re$^2$-Align)

What Representational Similarity Measures Imply about Decodable Information

Alex Williams


Abstract:

Neural responses encode information that is useful for a variety of downstream tasks. A common approach to understanding these systems is to build regression models (also called "decoders" or "linear probes") that reconstruct a target vector from neural responses. Popular neural network similarity measures like centered kernel alignment (CKA), canonical correlation analysis (CCA), and Procrustes shape distance do not explicitly leverage this perspective and instead highlight geometric invariances to orthogonal or affine transformations when comparing representations. Here, we show that many of these measures can, in fact, be equivalently motivated from a decoding perspective. Specifically, measures like CKA and CCA quantify the average alignment between optimal linear readouts across a distribution of decoding tasks. We also show that the Procrustes shape distance upper bounds the distance between optimal linear readouts, and that the converse holds for representations with a low participation ratio. Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information. This perspective suggests new ways of measuring similarity between neural systems and also provides novel, unifying interpretations of existing measures.
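
The following is a minimal numerical sketch, not the speaker's code, illustrating the quantities mentioned in the abstract: linear CKA, the Procrustes shape distance, and an empirical "decoding alignment" obtained by fitting optimal least-squares readouts of random target vectors from two representation matrices. The function names and the choice of an isotropic Gaussian target distribution are illustrative assumptions; the exact correspondences described in the talk depend on particular target distributions and normalizations.

```python
# Illustrative sketch (assumptions noted above), not the authors' implementation.
import numpy as np

def center(X):
    """Subtract column means so every feature has zero mean."""
    return X - X.mean(axis=0, keepdims=True)

def linear_cka(X, Y):
    """Linear CKA between two (n_samples x n_features) response matrices."""
    X, Y = center(X), center(Y)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def procrustes_distance(X, Y):
    """Procrustes shape distance after centering and unit Frobenius normalization."""
    X, Y = center(X), center(Y)
    X = X / np.linalg.norm(X, "fro")
    Y = Y / np.linalg.norm(Y, "fro")
    nuclear = np.linalg.svd(X.T @ Y, compute_uv=False).sum()
    return np.sqrt(max(2.0 - 2.0 * nuclear, 0.0))

def decoding_alignment(X, Y, n_tasks=2000, seed=0):
    """Average cosine similarity between optimal linear readouts of random targets.

    Targets are drawn i.i.d. N(0, I) here purely for illustration; the results in
    the talk correspond to specific target distributions for each measure.
    """
    rng = np.random.default_rng(seed)
    X, Y = center(X), center(Y)
    # The optimal least-squares readout of a target z from X is the projection
    # of z onto the column space of X, i.e. P_X z with P_X = X X^+.
    Px = X @ np.linalg.pinv(X)
    Py = Y @ np.linalg.pinv(Y)
    sims = []
    for _ in range(n_tasks):
        z = rng.standard_normal(X.shape[0])
        zx, zy = Px @ z, Py @ z
        sims.append((zx @ zy) / (np.linalg.norm(zx) * np.linalg.norm(zy) + 1e-12))
    return float(np.mean(sims))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 200, 30
    X = rng.standard_normal((n, p))
    # Y is a rotated, noisy copy of X, so the two representations should score
    # as similar under all three quantities.
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    Y = X @ Q + 0.3 * rng.standard_normal((n, p))
    print("linear CKA:          ", linear_cka(X, Y))
    print("Procrustes distance: ", procrustes_distance(X, Y))
    print("decoding alignment:  ", decoding_alignment(X, Y))
```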
