

Virtual presentation / poster accept

Towards Robustness Certification Against Universal Perturbations

Yi Zeng · Zhouxing Shi · Ming Jin · Feiyang Kang · Lingjuan Lyu · Cho-Jui Hsieh · Ruoxi Jia

Keywords: [ universal perturbation ] [ backdoor attack ] [ certified robustness ] [ adversarial attack ] [ poisoning attack ] [ Social Aspects of Machine Learning ]


Abstract:

In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each individual sample with respect to its worst-case perturbation given a neural network. However, such sample-wise bounds are loose under the UP threat model, as they overlook the important constraint that the perturbation must be shared across all samples. We propose a method based on a combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming (MILP) to establish the first robustness certification method for UPs. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Beyond an extensive evaluation of the proposed certification, we further show how it facilitates efficient comparison of robustness among different models, or of efficacy among different defenses against universal adversarial attacks, and enables accurate detection of backdoor target classes.
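To make the shared-perturbation constraint concrete, here is a minimal sketch, not the paper's formulation: it assumes a linear relaxation has already produced, for each sample i, a linear lower bound on the classification margin, margin_i(delta) >= C[i] @ delta + b[i], and then asks (as a big-M MILP, solved here via PuLP with its bundled CBC solver, an arbitrary choice) how many of those certified margins a single shared perturbation delta within an l-infinity ball could drive to zero simultaneously. The variable names and toy data are hypothetical.

```python
import numpy as np
import pulp

rng = np.random.default_rng(0)
n, d, eps = 8, 5, 0.1  # samples, input dimension, l_inf budget (toy values)

# Hypothetical per-sample linear lower bounds on the margin, as a linear
# relaxation of the network would supply: margin_i(delta) >= C[i] @ delta + b[i].
C = rng.normal(size=(n, d))
b = rng.uniform(0.05, 0.4, size=n)

prob = pulp.LpProblem("shared_UP_certification", pulp.LpMaximize)
delta = [pulp.LpVariable(f"delta_{j}", lowBound=-eps, upBound=eps) for j in range(d)]
z = [pulp.LpVariable(f"z_{i}", cat="Binary") for i in range(n)]

# z_i = 1 forces the i-th margin lower bound to be <= 0 under the SAME delta;
# U_i is a valid big-M constant (the maximum of the bound over the ball).
for i in range(n):
    U_i = b[i] + eps * np.abs(C[i]).sum()
    prob += pulp.lpSum(C[i][j] * delta[j] for j in range(d)) + b[i] <= U_i * (1 - z[i])

prob += pulp.lpSum(z)  # maximize how many samples one shared perturbation can break
prob.solve(pulp.PULP_CBC_CMD(msg=False))

k = int(pulp.value(prob.objective))
print(f"at most {k}/{n} samples attackable by any single shared perturbation")
print(f"certified UP robust accuracy on this batch >= {(n - k) / n:.2f}")
```

For comparison, a sample-wise certificate would declare sample i unverifiable whenever b[i] - eps * ||C[i]||_1 <= 0, independently of the other samples; the MILP optimum can never exceed that count, and the gap between the two is exactly what the shared-perturbation constraint recovers.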

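The population-level guarantee mentioned in the abstract can be illustrated with a standard concentration argument; the paper's framework may use a different or tighter inequality than this Hoeffding-style sketch. Given the certified fraction on a batch of n i.i.d. samples, it lower-bounds the certified fraction on the whole population at confidence 1 - delta.

```python
import math

def population_lower_bound(certified: int, n: int, delta: float = 0.05) -> float:
    """Hoeffding lower bound on the population certified-robust fraction.

    With probability at least 1 - delta over the random batch, the true
    fraction p of certifiably robust samples satisfies
        p >= certified/n - sqrt(ln(1/delta) / (2*n)).
    This is a generic concentration bound, not necessarily the exact
    inequality used in the paper.
    """
    p_hat = certified / n
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return max(0.0, p_hat - slack)

# e.g. 470 of 500 randomly sampled inputs certified => p >= ~0.885 at 95% confidence
print(f"{population_lower_bound(470, 500):.3f}")
```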