

Contributed Talk
in
Workshop: Debugging Machine Learning Models

Similarity of Neural Network Representations Revisited

Simon Kornblith

2019 Contributed Talk

Abstract:

Recent work has sought to understand the behavior of neural networks by comparing representations between layers and between different trained models. We introduce a similarity index that measures the relationship between representational similarity matrices. We show that this similarity index is equivalent to centered kernel alignment (CKA) and analyze its relationship to canonical correlation analysis. Unlike other methods, CKA can reliably identify correspondences between representations of layers in networks trained from different initializations. Moreover, CKA can reveal network pathology that is not evident from test accuracy alone.
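The linear variant of CKA described in the abstract can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' released code: given two representation matrices with examples in rows, it centers the features and computes the normalized Frobenius inner product between the cross-covariance and the self-covariances. The function name `linear_cka` is our own.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n x p1) and Y (n x p2),
    where rows are the same n examples passed through two layers/models."""
    # Center each feature (column) so similarity is computed on
    # mean-subtracted activations.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    denominator = (np.linalg.norm(X.T @ X, ord='fro')
                   * np.linalg.norm(Y.T @ Y, ord='fro'))
    return numerator / denominator
```

The index lies in [0, 1], equals 1 when one representation is an orthogonal transformation (plus isotropic scaling) of the other, and — unlike raw canonical correlations — is not invariant to arbitrary invertible linear maps, which is what lets it match corresponding layers across differently initialized networks.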
