In-Person Poster presentation / poster accept

On Achieving Optimal Adversarial Test Error

Justin D. Li · Matus Telgarsky

MH1-2-3-4 #159

Keywords: Theory


Abstract:

We first elucidate several fundamental properties of optimal adversarial predictors: the structure of optimal adversarial convex predictors in terms of optimal adversarial zero-one predictors, bounds relating the adversarial convex loss to the adversarial zero-one loss, and the fact that continuous predictors can get arbitrarily close to the optimal adversarial error for both convex and zero-one losses. Applying these results together with new Rademacher complexity bounds for adversarial training near initialization, we prove that, for general data distributions and perturbation sets, adversarial training on shallow networks with early stopping and an idealized optimal adversary achieves optimal adversarial test error. By contrast, prior theoretical work either considered specialized data distributions or provided only training-error guarantees.
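
For context, one standard way to formalize the two quantities the abstract compares is sketched below; the notation (perturbation set $\mathcal{P}$, margin loss $\ell$) is an illustrative assumption, not taken verbatim from the paper.

```latex
\[
\mathcal{R}^{0\text{-}1}_{\mathrm{adv}}(f)
  = \mathbb{E}_{(x,y)}\Big[\sup_{\delta \in \mathcal{P}}
      \mathbf{1}\{\operatorname{sign} f(x+\delta) \neq y\}\Big],
\qquad
\mathcal{R}^{\ell}_{\mathrm{adv}}(f)
  = \mathbb{E}_{(x,y)}\Big[\sup_{\delta \in \mathcal{P}}
      \ell\big(y\, f(x+\delta)\big)\Big].
\]
```

For readers who want a concrete picture of the training procedure the guarantee concerns, here is a minimal sketch of adversarial training on a shallow network with early stopping. It is not the paper's construction: the idealized optimal adversary is approximated by a standard PGD heuristic over an l-infinity ball, and all names, the radius `eps`, and the hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Shallow (one-hidden-layer) ReLU network, matching the paper's
# setting of adversarial training on shallow networks.
class ShallowNet(nn.Module):
    def __init__(self, dim, width=512):
        super().__init__()
        self.hidden = nn.Linear(dim, width)
        self.out = nn.Linear(width, 1)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x))).squeeze(-1)

def pgd_attack(model, x, y, eps, steps=10, step_size=0.02):
    """Approximate the idealized optimal adversary with projected
    gradient ascent on an l-infinity ball of radius eps (the paper
    allows general perturbation sets; this picks a concrete one)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.binary_cross_entropy_with_logits(
            model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # ascent step
            delta.clamp_(-eps, eps)                 # project onto the ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_train(model, train_loader, val_loader, eps, epochs=50):
    """Adversarial training with early stopping on held-out
    adversarial zero-one error; labels y are floats in {0, 1}."""
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    best_err, best_state = float("inf"), None
    for _ in range(epochs):
        for x, y in train_loader:
            delta = pgd_attack(model, x, y, eps)
            opt.zero_grad()
            nn.functional.binary_cross_entropy_with_logits(
                model(x + delta), y).backward()
            opt.step()
        # Early stopping: keep the iterate with the lowest
        # adversarial zero-one error on a held-out set.
        mistakes, n = 0, 0
        for x, y in val_loader:
            delta = pgd_attack(model, x, y, eps)
            with torch.no_grad():
                pred = (model(x + delta) > 0).float()
            mistakes += (pred != y).sum().item()
            n += y.numel()
        if mistakes / n < best_err:
            best_err = mistakes / n
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return best_err
```

Note that the paper's analysis assumes the inner maximization is solved exactly (an idealized optimal adversary) and studies training near initialization; the PGD loop above is only a common heuristic stand-in for that oracle.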
