Understanding Intrinsic Robustness Using Label Uncertainty

Xiao Zhang · David Evans


Keywords: concentration of measure

Poster Spot B2 (Virtual World) · Mon 25 Apr 10:30 a.m. – 12:30 p.m. PDT


A fundamental question in adversarial machine learning is whether a robust classifier exists for a given task. A line of research has made progress toward answering this question by studying the concentration of measure, but we argue that standard concentration fails to fully characterize the intrinsic robustness of a classification problem, since it ignores data labels, which are essential to any classification task. Building on a novel definition of label uncertainty, we empirically demonstrate that the error regions induced by state-of-the-art models tend to have much higher label uncertainty than randomly selected subsets. This observation motivates us to adapt a concentration estimation algorithm to account for label uncertainty, yielding more accurate intrinsic robustness measures for benchmark image classification problems.
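To make the comparison in the abstract concrete, the sketch below uses one plausible formalization of label uncertainty: the probability mass that a human soft-label distribution places off the assigned class. The synthetic data, the 1 − p definition, and the error-region construction are illustrative assumptions, not the paper's actual datasets or algorithm.

```python
import numpy as np

def label_uncertainty(soft_labels, assigned):
    """Per-example label uncertainty, taken here as the probability mass
    a soft-label distribution places off the assigned class.
    (An assumed formalization for illustration; the paper's exact
    definition may differ.)"""
    return 1.0 - soft_labels[np.arange(len(assigned)), assigned]

rng = np.random.default_rng(0)

# Synthetic soft labels over k classes for n examples: most examples are
# confidently labeled (p = 0.95), about 10% are ambiguous (p = 0.5).
n, k = 1000, 3
assigned = rng.integers(0, k, size=n)
conf = np.where(rng.random(n) < 0.9, 0.95, 0.5)

soft = np.zeros((n, k))
soft[np.arange(n), assigned] = conf
off = (1.0 - conf) / (k - 1)                      # spread the rest evenly
soft += off[:, None] * (1.0 - np.eye(k)[assigned])

u = label_uncertainty(soft, assigned)

# Hypothetical "error region": a model that errs mostly on ambiguous
# examples, versus a random subset of the same expected size.
err_mask = rng.random(n) < np.where(conf < 0.9, 0.8, 0.05)
rand_mask = rng.random(n) < err_mask.mean()

print(f"mean uncertainty, error region:  {u[err_mask].mean():.3f}")
print(f"mean uncertainty, random subset: {u[rand_mask].mean():.3f}")
```

Under these assumptions the error region shows markedly higher mean label uncertainty than a same-sized random subset, mirroring the qualitative observation the abstract reports for state-of-the-art models.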
