On the Convergence of Certified Robust Training with Interval Bound Propagation

Yihan Wang · Zhouxing Shi · Quanquan Gu · Cho-Jui Hsieh

Keywords: [ convergence ] [ certified robustness ] [ adversarial robustness ]


Interval Bound Propagation (IBP) currently forms the basis of state-of-the-art methods for training neural networks with certifiable robustness guarantees against potential adversarial perturbations, yet the convergence of IBP training has remained unstudied in the existing literature. In this paper, we present a theoretical analysis of the convergence of IBP training under an overparameterization assumption. We show that when IBP training is used to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent converges linearly to zero robust training error with high probability, provided the perturbation radius is sufficiently small and the network width is sufficiently large.
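To illustrate the mechanism the abstract refers to, the following is a minimal sketch (not the paper's implementation) of how IBP propagates an L-infinity perturbation ball through the two-layer ReLU architecture analyzed in the paper. For an affine layer, the interval center is mapped by the weights and the radius by their absolute values; ReLU is monotone, so its bounds are obtained by applying it to the endpoints. All function names here are illustrative.

```python
import numpy as np

def ibp_linear(l, u, W, b):
    """Propagate interval bounds [l, u] through the affine map W @ x + b.

    The center moves under W, the radius under |W| (since each output
    coordinate is maximized/minimized at a corner of the input box).
    """
    c, r = (l + u) / 2, (u - l) / 2
    c_new = W @ c + b
    r_new = np.abs(W) @ r
    return c_new - r_new, c_new + r_new

def ibp_relu(l, u):
    """ReLU is elementwise monotone, so apply it to both endpoints."""
    return np.maximum(l, 0), np.maximum(u, 0)

def ibp_forward(x, eps, W1, b1, W2, b2):
    """Sound output bounds for a two-layer ReLU network over the
    L-infinity ball {x' : ||x' - x||_inf <= eps}."""
    l, u = x - eps, x + eps
    l, u = ibp_linear(l, u, W1, b1)
    l, u = ibp_relu(l, u)
    return ibp_linear(l, u, W2, b2)
```

In IBP training, these bounds define a worst-case (robust) logit for each example, and the logistic loss is applied to that worst case; the paper's contribution is showing gradient descent on this loss converges linearly in the overparameterized regime.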