Poster in Workshop: Navigating and Addressing Data Problems for Foundation Models (DPFM)
How to Craft Backdoors with Unlabeled Data Alone?
Yifei Wang · Wenhan Ma · Stefanie Jegelka · Yisen Wang
Keywords: [ unsupervised learning ] [ security ] [ backdoor attack ] [ self-supervised learning ] [ poisoning ] [ unlabeled data ]
Relying only on unlabeled data, self-supervised learning (SSL) can learn rich features in an economical and scalable way. As the workhorse for building foundation models, SSL has recently received wide attention and application, which also raises security concerns. Among these, backdoor attacks are a major threat: if a released dataset is maliciously poisoned, backdoored SSL models can misbehave when triggers are injected into test samples. The goal of this work is to investigate this potential risk. We notice that existing backdoor attacks all require a considerable amount of labeled data, which may not be available for SSL. To circumvent this limitation, we explore a more restrictive setting called no-label backdoors, where we have access to unlabeled data alone and the key challenge is how to select a proper poison set without label information. We propose two strategies for poison selection: clustering-based selection using pseudolabels, and contrastive selection derived from the mutual information principle. Experiments on CIFAR-10 and ImageNet-100 show that both no-label backdoors are effective against many SSL methods and outperform random poisoning by a large margin.
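To make the clustering-based strategy concrete, here is a minimal sketch of pseudolabel-based poison selection: cluster SSL feature embeddings with k-means to form pseudolabels, then draw the poison set from a single cluster. The compactness criterion, the `select_poison_set` helper, and the synthetic features are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_poison_set(features, poison_budget, n_clusters=10):
    """Cluster features into pseudolabel groups and pick the poison set
    from the most compact cluster (an assumed selection criterion)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    # Compactness: mean distance of each cluster's members to its centroid.
    compactness = []
    for c in range(n_clusters):
        members = features[km.labels_ == c]
        compactness.append(
            np.linalg.norm(members - km.cluster_centers_[c], axis=1).mean()
        )
    target = int(np.argmin(compactness))  # most compact pseudolabel cluster
    candidates = np.flatnonzero(km.labels_ == target)
    # Within the chosen cluster, prefer samples closest to the centroid.
    dists = np.linalg.norm(features[candidates] - km.cluster_centers_[target], axis=1)
    return candidates[np.argsort(dists)[:poison_budget]]

# Toy demo with synthetic stand-ins for SSL features (5 well-separated groups).
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(loc=i, scale=0.3, size=(100, 16)) for i in range(5)])
idx = select_poison_set(feats, poison_budget=50, n_clusters=5)
print(len(idx))
```

In a real attack the selected indices would then receive the backdoor trigger before the poisoned dataset is released; concentrating the poison in one pseudolabel cluster is what lets it beat random poisoning, since the poisoned samples share consistent semantics.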