
Spotlight Poster

Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation

Kimia Hamidieh · Haoran Zhang · Swami Sankaranarayanan · Marzyeh Ghassemi

Halle B #286
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT


Supervised learning methods have been found to exhibit inductive biases favoring simpler features. When such features are spuriously correlated with the label, this can result in suboptimal performance on minority subgroups. Despite the growing popularity of methods that learn from unlabeled data, the extent to which these representations rely on spurious features for prediction is unclear. In this work, we explore the impact of spurious features on Self-Supervised Learning (SSL) for visual representation learning. We first empirically show that commonly used augmentations in SSL can induce undesired invariances in the image space, and illustrate this with a simple example. We further show that classical approaches to combating spurious correlations, such as dataset re-sampling during SSL, do not consistently lead to invariant representations. Motivated by these findings, we propose LateTVG, which removes spurious information from these representations during pre-training by regularizing later layers of the encoder via pruning. We find that our method produces representations that outperform the baselines on several benchmarks, without the need for group or label information during SSL.
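The core idea of pruning only the later layers of an encoder can be sketched in plain Python. Note that this is a minimal illustrative sketch, not the paper's actual method: the toy "encoder" (layers as flat weight lists), the magnitude-pruning criterion, and the `start_frac` and `amount` parameters are all assumptions for demonstration; LateTVG's real pruning procedure and schedule are described in the paper itself.

```python
import random

random.seed(0)

# Toy "encoder": each entry is one layer's flattened weights.
# (Illustrative stand-in; the paper's encoder architecture is not specified here.)
encoder = [[random.uniform(-1, 1) for _ in range(100)] for _ in range(4)]

def magnitude_prune(weights, amount):
    """Zero the smallest-magnitude fraction `amount` of a layer's weights."""
    k = int(len(weights) * amount)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        if abs(w) <= threshold and zeroed < k:
            pruned.append(0.0)   # prune: weight falls below the magnitude cutoff
            zeroed += 1
        else:
            pruned.append(w)     # keep: weight survives pruning
    return pruned

def prune_later_layers(layers, start_frac=0.5, amount=0.3):
    """Prune only the later part of the encoder, leaving early layers intact."""
    start = int(len(layers) * start_frac)
    return layers[:start] + [magnitude_prune(w, amount) for w in layers[start:]]

pruned = prune_later_layers(encoder)
sparsity = [sum(w == 0.0 for w in layer) / len(layer) for layer in pruned]
print(sparsity)  # → [0.0, 0.0, 0.3, 0.3]: early layers dense, later layers 30% sparse
```

The intuition, per the abstract, is that constraining the capacity of later layers discourages the encoder from encoding spurious shortcut features in its output representation, while early layers remain free to learn general low-level features.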
