The Close Relationship Between Contrastive Learning and Meta-Learning

Renkun Ni · Manli Shu · Hossein Souri · Micah Goldblum · Tom Goldstein

Keywords: [ self-supervised learning ] [ contrastive learning ] [ meta-learning ]

[ Abstract ]
Poster: Spot C1 in Virtual World · Mon 25 Apr 6:30 p.m. — 8:30 p.m. PDT


Contrastive learning has recently taken off as a paradigm for learning from unlabeled data. In this paper, we discuss the close relationship between contrastive learning and meta-learning under a certain task distribution. We complement this observation by showing that established meta-learning methods, such as Prototypical Networks, achieve comparable performance to SimCLR when paired with this task distribution. This relationship can be leveraged by taking established techniques from meta-learning, such as task-based data augmentation, and showing that they benefit contrastive learning as well. These tricks also benefit state-of-the-art self-supervised learners that do not use negative pairs, such as BYOL: a self-supervised ResNet-18 feature extractor trained with BYOL and our meta-learning tricks achieves 94.6% accuracy on CIFAR-10. We conclude that existing advances designed for contrastive learning or meta-learning can be exploited to benefit the other, and that it is better for contrastive learning researchers to take lessons from the meta-learning literature (and vice versa) than to reinvent the wheel.
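One way to see the relationship the abstract describes is a minimal sketch (our illustration, not the paper's code): build a 1-shot Prototypical Network episode in which the two augmented views of each image act as the support and query examples of one class. With L2-normalized embeddings, the ProtoNet negative-squared-distance logits differ from SimCLR's cosine-similarity logits at temperature 0.5 only by a constant, since -||q - p||² = 2 q·p - 2 for unit vectors, so the two cross-entropy losses coincide exactly. The function and variable names below are hypothetical.

```python
# Sketch: a 1-shot ProtoNet episode over augmentation pairs equals
# an InfoNCE (SimCLR-style) loss at temperature 0.5 on unit vectors.
import math
import random

def normalize(v):
    """L2-normalize a vector (list of floats)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def softmax_ce(logits, target):
    """Cross-entropy of a softmax over `logits` against class `target`."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[target] / sum(exps))

def protonet_loss(support, query):
    """Prototypical Network loss: support[i] is the prototype (first view)
    of image i; query[i] (second view) should be classified as class i
    using negative squared Euclidean distance as the logit."""
    total = 0.0
    for i, q in enumerate(query):
        logits = [-sum((a - b) ** 2 for a, b in zip(q, p)) for p in support]
        total += softmax_ce(logits, i)
    return total / len(query)

def infonce_loss(support, query, temperature):
    """SimCLR-style contrastive loss: cosine similarity (dot product of
    unit vectors) divided by a temperature, then cross-entropy."""
    total = 0.0
    for i, q in enumerate(query):
        logits = [sum(a * b for a, b in zip(q, p)) / temperature
                  for p in support]
        total += softmax_ce(logits, i)
    return total / len(query)

# Random stand-ins for the two augmented views of n images, embedded in R^d.
random.seed(0)
n, d = 8, 16
support = [normalize([random.gauss(0, 1) for _ in range(d)]) for _ in range(n)]
query = [normalize([random.gauss(0, 1) for _ in range(d)]) for _ in range(n)]

# The constant -2 cancels inside the softmax, so the losses match exactly.
assert abs(protonet_loss(support, query)
           - infonce_loss(support, query, 0.5)) < 1e-9
```

The equivalence holds for this particular episode construction (one view as prototype, one as query); the paper's broader point is that the same task distribution lets meta-learning tricks such as task-based augmentation transfer to contrastive learners.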
