

In-Person Poster presentation / poster accept

Why adversarial training can hurt robust accuracy

Jacob Clarysse · Julia Hörrmann · Fanny Yang

MH1-2-3-4 #144

Keywords: [ learning theory ] [ robust generalisation ] [ adversarial training ] [ theory ]


Abstract:

Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite can be true for a natural class of perceptible perturbations: even though adversarial training helps when enough data is available, it may in fact hurt robust generalization in the small sample size regime. We first prove this phenomenon for a high-dimensional linear classification setting with noiseless observations. Using intuitive insights from the proof, we then find, perhaps surprisingly, that this behavior persists for perturbations on standard image datasets. Specifically, it occurs for perceptible attacks that effectively reduce class information, such as object occlusions or corruptions.
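To make the setting concrete, the sketch below contrasts standard and adversarial training of a linear classifier in a toy high-dimensional, small-sample regime, using a single-coordinate perturbation that removes class information (a crude stand-in for an occlusion-like, perceptible attack). This is not the authors' code or exact setup; the data model, the perturbation, and all parameters are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact setting):
# compare standard vs. adversarial training of a linear classifier when the
# attack shifts the single signal coordinate toward the decision boundary,
# i.e. a perturbation that reduces class information.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, eps = 200, 20, 2000, 0.5   # high dimension, few samples
theta_star = np.zeros(d)
theta_star[0] = 1.0                            # noiseless labels from one signal coordinate

def sample(n):
    X = rng.normal(size=(n, d))
    y = np.sign(X @ theta_star)
    return X, y

def perturb(X, y, theta, eps=eps):
    """Worst-case shift of coordinate 0 within |delta| <= eps for the linear
    classifier theta: pushes the signal coordinate toward the boundary."""
    X_adv = X.copy()
    direction = np.sign(theta[0]) if theta[0] != 0 else 1.0
    X_adv[:, 0] -= eps * y * direction
    return X_adv

def train(X, y, adversarial, lr=0.1, steps=500):
    """Subgradient descent on the hinge loss; if `adversarial`, each step uses
    the worst-case perturbed inputs under the current parameters."""
    theta = np.zeros(d)
    for _ in range(steps):
        Xt = perturb(X, y, theta) if adversarial else X
        margins = y * (Xt @ theta)
        active = (margins < 1).astype(float)
        grad = -(Xt * (y * active)[:, None]).mean(axis=0)
        theta -= lr * grad
    return theta

def robust_error(theta, X, y):
    Xa = perturb(X, y, theta)
    return np.mean(np.sign(Xa @ theta) != y)

X, y = sample(n_train)
Xte, yte = sample(n_test)
for adv in (False, True):
    theta = train(X, y, adversarial=adv)
    print(f"adversarial={adv}: robust test error = {robust_error(theta, Xte, yte):.3f}")
```

Which of the two procedures achieves lower robust error depends on the sample size and perturbation strength; the paper's claim is precisely that in the small-sample regime adversarial training can come out worse.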
