

In-Person Poster presentation / poster accept

Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning

Rundong Luo · Yifei Wang · Yisen Wang

MH1-2-3-4 #165

Keywords: [ contrastive learning ] [ adversarial contrastive learning ] [ adversarial training ] [ Unsupervised and Self-supervised learning ]


Abstract:

Recent works have shown that self-supervised learning can achieve remarkable robustness when integrated with adversarial training (AT). However, the robustness gap between supervised AT (sup-AT) and self-supervised AT (self-AT) remains significant. Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: either strong or weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap. To resolve this dilemma, we propose a simple remedy named DYNACL (Dynamic Adversarial Contrastive Learning). In particular, we propose an augmentation schedule that gradually anneals from a strong augmentation to a weak one to benefit from both extreme cases. In addition, we adopt a fast post-processing stage for adapting it to downstream tasks. Through extensive experiments, we show that DYNACL can improve state-of-the-art self-AT robustness by 8.84% under Auto-Attack on the CIFAR-10 dataset, and can even outperform vanilla supervised adversarial training for the first time. Our code is available at \url{https://github.com/PKU-ML/DYNACL}.
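To make the annealing idea concrete, below is a minimal sketch of a strong-to-weak augmentation schedule, assuming a SimCLR-style torchvision pipeline whose intensity is controlled by a single scalar. The function names (strength_at_epoch, build_augmentation), the linear decay, and the specific transform parameters are illustrative assumptions, not the authors' exact DYNACL schedule; see the released code at the URL above for the actual implementation.

import torchvision.transforms as T

def strength_at_epoch(epoch, total_epochs, start=1.0, end=0.0):
    # Linearly anneal the augmentation strength from `start` (strong) to `end` (weak).
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return start + frac * (end - start)

def build_augmentation(strength, image_size=32):
    # SimCLR-style augmentation whose crop, color-jitter, and grayscale
    # intensities all scale with the single scalar `strength` in [0, 1].
    return T.Compose([
        T.RandomResizedCrop(image_size, scale=(1.0 - 0.9 * strength, 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomApply(
            [T.ColorJitter(0.8 * strength, 0.8 * strength,
                           0.8 * strength, 0.2 * strength)],
            p=0.8 * strength),
        T.RandomGrayscale(p=0.2 * strength),
        T.ToTensor(),
    ])

# Example: strength decays from 1.0 (strong) to 0.0 (weak) over 100 epochs,
# so early training sees aggressive views and late training sees nearly clean images.
for epoch in range(0, 100, 25):
    s = strength_at_epoch(epoch, total_epochs=100)
    transform = build_augmentation(s)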
