
In-Person Poster presentation / poster accept

The hidden uniform cluster prior in self-supervised learning

Mahmoud Assran · Randall Balestriero · Quentin Duval · Florian Bordes · Ishan Misra · Piotr Bojanowski · Pascal Vincent · Michael Rabbat · Nicolas Ballas

MH1-2-3-4 #166

Keywords: [ transfer learning ] [ unsupervised learning ] [ representation learning ] [ self-supervised learning ] [ Unsupervised and Self-supervised learning ]


A successful paradigm in representation learning is to perform self-supervised pretraining using tasks based on mini-batch statistics (e.g., SimCLR, VICReg, SwAV, MSN). We show that the formulation of all these methods contains an overlooked prior to learn features that enable uniform clustering of the data. While this prior has led to remarkably semantic representations when pretraining on class-balanced data, such as ImageNet, we demonstrate that it can hamper performance when pretraining on class-imbalanced data. By moving away from conventional uniformity priors and instead preferring power-law distributed feature clusters, we show that one can improve the quality of the learned representations on real-world class-imbalanced datasets. To demonstrate this, we develop an extension of the Masked Siamese Networks (MSN) method to support the use of arbitrary feature priors.
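The idea of swapping the uniform cluster prior for a power-law one can be sketched as a prior-matching regularizer: the batch-averaged cluster-assignment distribution is pulled toward a target distribution over clusters, where a uniform target recovers the usual uniformity prior and a power-law target prefers imbalanced clusters. The sketch below is illustrative, not the authors' implementation; the function names, the rank-based power-law parameterization, and the exponent `tau` are assumptions for illustration.

```python
import numpy as np

def power_law_prior(num_clusters, tau=0.25):
    # Hypothetical power-law target over cluster indices: p_k ∝ k^(-tau).
    # tau = 0 recovers the conventional uniform prior.
    ranks = np.arange(1, num_clusters + 1, dtype=np.float64)
    p = ranks ** (-tau)
    return p / p.sum()

def prior_matching_loss(cluster_probs, prior):
    # KL divergence between the batch-averaged cluster-assignment
    # distribution and the target prior. Minimizing it pulls the average
    # assignment toward the prior; with a uniform prior this plays the
    # role of an entropy-style regularizer on the mean assignments.
    mean_probs = cluster_probs.mean(axis=0)  # average over the mini-batch
    eps = 1e-12  # numerical stability for the logs
    return float(np.sum(mean_probs * (np.log(mean_probs + eps) - np.log(prior + eps))))
```

Under this sketch, the loss is zero exactly when the mean assignment distribution matches the chosen prior, so the prior's shape (uniform vs. power-law) directly controls the cluster-size distribution the features are encouraged to support.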
