

Poster

ARMOURED: Adversarially Robust MOdels using Unlabeled data by REgularizing Diversity

Kangkang Lu · Cuong Nguyen · Xun Xu · Kiran Chari · Yu Jing Goh · Chuan-Sheng Foo

Keywords: [ Adversarial Robustness ] [ Semi-Supervised Learning ] [ Multi-View Learning ] [ Diversity Regularization ] [ Entropy Maximization ]


Abstract:

Adversarial attacks pose a major challenge for modern deep neural networks. Recent work shows that adversarially robust generalization requires a large amount of labeled training data. If annotation becomes a burden, can unlabeled data help bridge the gap? In this paper, we propose ARMOURED, an adversarially robust training method based on semi-supervised learning that consists of two components. The first component applies multi-view learning to simultaneously optimize multiple independent networks and uses unlabeled data to enforce labeling consistency. The second component reduces adversarial transferability among the networks via diversity regularizers inspired by determinantal point processes and entropy maximization. Experimental results show that, under small perturbation budgets, ARMOURED is robust against strong adaptive adversaries. Notably, ARMOURED does not rely on generating adversarial samples during training. When combined with adversarial training, ARMOURED performs competitively with state-of-the-art adversarially robust baselines on SVHN, outperforms them on CIFAR-10, and offers higher clean accuracy.
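The abstract does not give the exact form of the diversity regularizers, but the two ideas it names (a determinantal-point-process-inspired term and entropy maximization) can be illustrated with a minimal PyTorch-style sketch. The function names (`dpp_diversity_loss`, `entropy_maximization_loss`) and the specific kernel construction below are hypothetical assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dpp_diversity_loss(probs_list, eps=1e-6):
    """DPP-inspired diversity regularizer (illustrative sketch, not the paper's exact loss).

    probs_list: list of (batch, num_classes) softmax outputs, one per member network.
    Encourages the members' predictive distributions to differ by maximizing the
    log-determinant of their per-example Gram matrix, a determinantal-point-process-style
    volume measure that is large when the prediction vectors are mutually dissimilar.
    """
    # Stack member predictions: (batch, num_members, num_classes)
    P = torch.stack(probs_list, dim=1)
    # L2-normalize so the Gram matrix has unit diagonal
    P = F.normalize(P, p=2, dim=-1)
    # Pairwise similarity kernel per example: (batch, M, M)
    K = torch.bmm(P, P.transpose(1, 2))
    # Jitter for numerical stability, then batched log-determinant
    eye = torch.eye(K.size(-1), device=K.device).unsqueeze(0)
    logdet = torch.logdet(K + eps * eye)
    # Return a quantity to minimize: negative diversity
    return -logdet.mean()

def entropy_maximization_loss(probs_list, eps=1e-12):
    """Entropy-maximization regularizer (illustrative sketch).

    Penalizes overconfident, collapsed predictions by rewarding high entropy;
    returning the negative mean entropy makes it a loss to minimize alongside
    the supervised and consistency objectives.
    """
    neg_entropy = 0.0
    for p in probs_list:
        entropy = -(p * (p + eps).log()).sum(dim=-1).mean()
        neg_entropy = neg_entropy - entropy
    return neg_entropy / len(probs_list)
```

In a training loop, these terms would be weighted and added to the supervised loss on labeled data and a consistency loss on unlabeled data across the multiple networks; the weighting scheme here is again an assumption rather than the published configuration.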
