Poster in Workshop: First Workshop on Representational Alignment (Re-Align)

On the universality of neural encodings in CNNs

Florentin Guth · Brice Ménard

Keywords: [ transfer learning ] [ universality ] [ weight covariances ] [ representation alignment ]


Abstract:

We explore the universality of neural encodings in convolutional neural networks (CNNs) trained on image classification tasks. We develop a procedure to directly compare the learned weights rather than their representations. It is based on a factorization of spatial and channel dimensions and measures the similarity of aligned weight covariances. We show that, for a range of layers of VGG-type networks, the learned eigenvectors appear to be universal across different natural image datasets. Our results suggest the existence of a universal neural encoding for natural images and explain, at a more fundamental level, the success of transfer learning. Our approach shows that, instead of aiming to maximize performance, one can also attempt to maximize the universality of the learned encoding, a step toward a foundation model.
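The comparison described above can be illustrated with a minimal sketch. The snippet below is not the authors' exact procedure (which factorizes spatial and channel dimensions separately before aligning covariances); it is a simplified illustration, assuming hypothetical helper names, of the core idea: estimate a weight covariance from a convolutional filter bank and compare the leading eigenspaces of two such covariances.

```python
import numpy as np

def channel_covariance(weights):
    # weights: (out_channels, in_channels, k, k) conv filter bank.
    # Treat each output filter as one sample; flatten the remaining
    # in-channel and spatial dimensions into a single coordinate axis.
    w = weights.reshape(weights.shape[0], -1)
    w = w - w.mean(axis=0, keepdims=True)
    return w.T @ w / w.shape[0]

def subspace_overlap(cov_a, cov_b, k=10):
    # Compare the top-k eigenspaces of two weight covariances.
    # Returns a value in [0, 1]; 1.0 means the leading eigenvectors
    # span the same subspace (i.e., the encodings are aligned).
    _, va = np.linalg.eigh(cov_a)
    _, vb = np.linalg.eigh(cov_b)
    ua, ub = va[:, -k:], vb[:, -k:]  # eigh sorts eigenvalues ascending
    return np.linalg.norm(ua.T @ ub) ** 2 / k

# Two random filter banks as stand-ins for layers trained on
# different datasets (in the paper, these would be learned weights).
rng = np.random.default_rng(0)
w1 = rng.normal(size=(64, 3, 3, 3))
w2 = rng.normal(size=(64, 3, 3, 3))
print(subspace_overlap(channel_covariance(w1), channel_covariance(w2)))
```

For trained networks, a high overlap between covariance eigenspaces of layers trained on different datasets would indicate the kind of universality the abstract describes.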
