Poster
Boosting Robustness Certification of Neural Networks
Gagandeep Singh · Timon Gehr · Markus Püschel · Martin Vechev
Great Hall BC #26
Keywords: [ adversarial attacks ] [ robustness certification ] [ abstract interpretation ] [ MILP solvers ] [ verification of neural networks ]
We present a novel approach to certifying neural networks against adversarial perturbations that combines scalable overapproximation methods with precise mixed-integer linear programming. This combination yields significantly better precision than state-of-the-art verifiers on challenging feedforward and convolutional neural networks with piecewise-linear activation functions.
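To make the overapproximation half of such a pipeline concrete, the sketch below implements plain interval bound propagation for a ReLU network and uses it to certify a box of inputs. This is a generic illustration, not the authors' actual method (which refines such bounds with LP/MILP solving); all function names and the toy network are assumptions for the example.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an input box [lo, hi] through the affine layer Wx + b.
    # Positive weights carry lower bounds to lower bounds; negative
    # weights swap them.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_box(layers, x, eps, target):
    # layers: list of (W, b) pairs; ReLU after every layer but the last.
    # Returns True if every input in the L-infinity ball of radius eps
    # around x provably keeps logit `target` strictly maximal.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    others = [j for j in range(len(lo)) if j != target]
    return all(lo[target] > hi[j] for j in others)
```

When this cheap pass fails to certify, a precise verifier can encode the same network (with binary variables for unstable ReLUs) as a MILP and decide robustness exactly, which is the trade-off the abstract describes.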