

Poster in Workshop: Setting up ML Evaluation Standards to Accelerate Progress

Increasing Confidence in Adversarial Robustness Evaluations

Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini


Abstract:

Hundreds of defenses have been proposed in recent years to make deep neural networks robust against minimal (adversarial) input perturbations. However, only a handful of them have held up to their claims, because correctly evaluating robustness is extremely challenging: weak attacks often fail to find adversarial examples even when they exist, thereby making a vulnerable network look robust. In this paper, we propose a test to identify weak attacks. Our test introduces a small and simple modification into a neural network that guarantees the existence of an adversarial example for every sample. Consequently, any correct attack must succeed in attacking this modified network. For eleven out of thirteen previously published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it. We hope that attack unit tests such as ours will become a major component of future robustness evaluations and increase confidence in an empirical field that today is riddled with skepticism and disbelief.
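The following is a minimal sketch of the general idea described in the abstract, not the authors' actual construction: it plants an artificial vulnerability into a model so that an adversarial example is guaranteed to exist for a given clean sample, and then checks whether a candidate attack finds it. The class name, the single-coordinate trigger, the `attack_fn` interface, and all parameters are illustrative assumptions.

```python
# Hedged illustration only: the planted single-coordinate vulnerability below is an
# assumption for exposition; the paper's own test uses a different construction.
import torch
import torch.nn as nn


class PlantedVulnerabilityModel(nn.Module):
    """Wraps a classifier, for one clean sample x0, so that raising a single
    designated input coordinate by roughly eps forces the output towards
    target_class (assumed different from the clean label). An adversarial
    example -- x0 with that coordinate bumped by eps -- therefore provably
    exists inside the L_inf ball of radius eps, so any sufficiently strong
    attack must succeed on this modified model."""

    def __init__(self, base_model: nn.Module, x0: torch.Tensor, eps: float,
                 coord: int, target_class: int, scale: float = 1e4):
        super().__init__()
        self.base_model = base_model
        self.register_buffer("x0", x0)  # clean sample, shape (1, ...)
        self.eps = eps
        self.coord = coord
        self.target_class = target_class
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.base_model(x)
        # How far the designated coordinate has moved away from its clean value.
        delta = (x - self.x0).flatten(1)[:, self.coord]
        # Smooth gate (instead of a hard threshold) so gradient-based attacks
        # can discover the planted flaw.
        gate = torch.sigmoid(50.0 / self.eps * (delta - self.eps / 2))
        boost = torch.zeros_like(logits)
        boost[:, self.target_class] = self.scale * gate
        return logits + boost


def attack_unit_test(attack_fn, base_model, x0, y0, eps, coord=0, target_class=0):
    """Returns True if the attack flips the prediction of the modified model,
    as any correct attack must, since an adversarial example provably exists."""
    model = PlantedVulnerabilityModel(base_model, x0, eps, coord, target_class)
    model.eval()
    x_adv = attack_fn(model, x0, y0, eps)  # assumed attack interface
    assert (x_adv - x0).abs().max() <= eps + 1e-6, "attack left the threat model"
    return bool((model(x_adv).argmax(dim=1) != y0).all())
```

Under these assumptions, an attack that fails this test on many samples is too weak to be trusted, whereas a defense evaluation whose attack passes the test provides stronger evidence of genuine robustness.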
