Poster
σ-zero: Gradient-based Optimization of ℓ0-norm Adversarial Examples
Antonio Emanuele Cinà · Francesco Villani · Maura Pintor · Lea Schönherr · Battista Biggio · Marcello Pelillo
Hall 3 + Hall 2B #296
Sat 26 Apr, midnight to 2:30 a.m. PDT
Abstract:
Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging.While most attacks consider ℓ2- and ℓ∞-norm constraints to craft input perturbations, only a few investigate sparse ℓ1- and ℓ0-norm attacks.In particular, ℓ0-norm attacks remain the least studied due to the inherent complexity of optimizing over a non-convex and non-differentiable constraint.However, evaluating adversarial robustness under these attacks could reveal weaknesses otherwise left untested with more conventional ℓ2- and ℓ∞-norm attacks.In this work, we propose a novel ℓ0-norm attack, called σ-zero, which leverages a differentiable approximation of the ℓ0 norm to facilitate gradient-based optimization, and an adaptive projection operator to dynamically adjust the trade-off between loss minimization and perturbation sparsity.Extensive evaluations using MNIST, CIFAR10, and ImageNet datasets, involving robust and non-robust models, show that σ-zero finds minimum ℓ0-norm adversarial examples without requiring any time-consuming hyperparameter tuning, and that it outperforms all competing sparse attacks in terms of success rate, perturbation size, and efficiency.