

Virtual presentation / top 25% paper

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning

Hao He · Kaiwen Zha · Dina Katabi

Keywords: [ Data Poisoning ] [ Contrastive Learning ] [ General Machine Learning ]


Abstract:

Indiscriminate data poisoning attacks are quite effective against supervised learning. However, little is known about their impact on unsupervised contrastive learning (CL). This paper is the first to study indiscriminate poisoning attacks on contrastive learning. We propose Contrastive Poisoning (CP), the first effective such attack on CL. We empirically show that Contrastive Poisoning not only drastically reduces the performance of CL algorithms but also attacks supervised learning models, making it the most generalizable indiscriminate poisoning attack. We also show that CL algorithms with a momentum encoder are more robust to indiscriminate poisoning, and we propose a new countermeasure based on matrix completion. Code is available at: https://github.com/kaiwenzha/contrastive-poisoning.
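The abstract does not spell out how the poisoning perturbations are crafted. The sketch below is a hypothetical illustration of how an indiscriminate poisoning attack against a SimCLR-style contrastive learner could be set up: sample-wise perturbations are optimized to minimize the InfoNCE loss, so that training on the poisoned data becomes trivially easy and the encoder learns shortcut features. The function names, the `augment` module (assumed differentiable), and all hyperparameters are illustrative assumptions, not the paper's actual Contrastive Poisoning algorithm.

```python
# Hypothetical sketch of indiscriminate poisoning against a contrastive learner.
# Assumes a PyTorch encoder and a differentiable augmentation module `augment`.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Standard InfoNCE / NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                       # (2N, d)
    sim = z @ z.t() / temperature                        # pairwise similarities, (2N, 2N)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def poison_step(encoder, images, delta, augment, eps=8 / 255, lr=0.1):
    """One gradient step on sample-wise perturbations `delta`, descending the
    contrastive loss so the poisoned data carries easy, non-generalizing features."""
    delta = delta.detach().requires_grad_(True)
    x = (images + delta).clamp(0, 1)                     # poisoned images stay valid
    loss = info_nce(encoder(augment(x)), encoder(augment(x)))
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta -= lr * grad.sign()                        # minimize the CL loss
        delta.clamp_(-eps, eps)                          # keep the poison imperceptible
    return delta.detach(), loss.item()
```

In such a scheme the perturbation update would typically alternate with ordinary encoder training steps on the poisoned data; see the released code at the URL above for the authors' actual procedure.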
