ICLR 2018


Workshop

Black-box Attacks on Deep Neural Networks via Gradient Estimation

Arjun Nitin Bhagoji · Warren He · Bo Li · Dawn Song

East Meeting Level 8 + 15 #16

In this paper, we propose novel Gradient Estimation black-box attacks that generate adversarial examples using only query access to the target model's class probabilities, without relying on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial example from the dimensionality of the input. An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs. We show that the proposed Gradient Estimation attacks outperform all other black-box attacks we tested on both the MNIST and CIFAR-10 datasets, achieving attack success rates similar to well-known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai.
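The core idea described above — estimating the loss gradient from class-probability queries alone, then taking an adversarial step — can be sketched with two-sided finite differences. This is a minimal illustrative sketch, not the authors' implementation: the toy softmax "target model", the query budget (2 queries per input dimension, which the paper's query-reduction strategies are designed to avoid), and the single FGSM-style step are all simplifying assumptions.

```python
import numpy as np

# Toy stand-in for a black-box target model: the attacker can only query
# class probabilities, never weights or gradients (hypothetical model).
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3))  # 10-dim input, 3 classes

def query_probs(x):
    """One black-box query: return the model's class probabilities."""
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def xent_loss(x, y):
    # Cross-entropy of the true class, computed from probabilities only.
    return -np.log(query_probs(x)[y] + 1e-12)

def estimate_gradient(x, y, delta=1e-4):
    # Two-sided finite differences: 2*d queries for a d-dimensional input.
    # (The paper's query-reduction strategies cut this cost; omitted here.)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        grad[i] = (xent_loss(x + e, y) - xent_loss(x - e, y)) / (2 * delta)
    return grad

# Untargeted FGSM-style step using the estimated (not true) gradient.
x = rng.standard_normal(10)
y = int(np.argmax(query_probs(x)))   # model's current prediction
eps = 0.5
x_adv = x + eps * np.sign(estimate_gradient(x, y))
```

Repeating the estimate-then-step loop with a smaller step size gives the iterative variant mentioned above, which drives the true-class probability down further at the cost of more queries.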
