

Poster

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

Di He · Boqing Gong · Liwei Wang · Chen Dan · Cho-Jui Hsieh · Huan Zhang · Runtian Zhai · Pradeep K Ravikumar


Abstract:

Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training yet performs better than all existing provable ℓ2 defenses. Recent work shows that randomized smoothing can be used to provide a certified ℓ2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including CIFAR-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve a larger average certified radius.
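For context, below is a minimal sketch of the certified ℓ2 radius that MACER maximizes, following the randomized smoothing bound of Cohen et al. (2019), which the abstract refers to as "recent work." The function name, signature, and the example numbers are illustrative assumptions, not the authors' implementation; MACER itself optimizes a differentiable surrogate of this quantity during training.

# Sketch (assumption, not the authors' code): certified l2 radius of a
# randomly smoothed classifier, per the randomized smoothing bound.
from scipy.stats import norm

def certified_radius(p_top: float, p_runner_up: float, sigma: float) -> float:
    """R = sigma/2 * (Phi^-1(p_top) - Phi^-1(p_runner_up)).

    p_top: (lower bound on) the probability that Gaussian-perturbed inputs
           receive the top class; p_runner_up: (upper bound on) the runner-up
           class probability; sigma: std of the Gaussian smoothing noise.
    """
    if p_top <= p_runner_up:
        return 0.0  # no robustness certificate in this case
    return 0.5 * sigma * (norm.ppf(p_top) - norm.ppf(p_runner_up))

# Example: with sigma = 0.25 and class probabilities 0.9 vs 0.05, the
# smoothed classifier is certifiably robust within an l2 radius of ~0.366.
print(certified_radius(0.9, 0.05, 0.25))

Maximizing this radius over correctly classified training examples, rather than minimizing a loss under adversarial perturbations, is what makes the training attack-free.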
