

In-Person Poster presentation / poster accept

Benign Overfitting in Classification: Provably Counter Label Noise with Larger Models

Kaiyue Wen · Jiaye Teng · Jingzhao Zhang

MH1-2-3-4 #161

Keywords: [ mild overparameterization ] [ implicit bias ] [ benign overfitting ] [ generalization ] [ Theory ]


Abstract:

Studies on benign overfitting provide insights into the success of overparameterized deep learning models. In this work, we examine whether overfitting is truly benign in real-world classification tasks. We start with the observation that a ResNet model overfits benignly on CIFAR-10 but not benignly on ImageNet. To understand why benign overfitting fails in the ImageNet experiment, we theoretically analyze benign overfitting under a more restrictive setup where the number of parameters is not significantly larger than the number of data points. Under this mild overparameterization setup, our analysis identifies a phase change: unlike in the previous heavy overparameterization settings, benign overfitting can now fail in the presence of label noise. Our analysis explains our empirical observations and is validated by a set of control experiments with ResNets. Our work highlights the importance of understanding implicit bias in underfitting regimes as a future direction.
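The phase change described above can be illustrated in a classical, much simpler analogue than the paper's classification setup: minimum-norm interpolation in linear regression with isotropic Gaussian features. The sketch below (an illustrative assumption, not the paper's actual construction or experiments) fits noisy labels exactly with the minimum-l2-norm interpolator and compares the excess risk under mild overparameterization (d slightly above n) against heavy overparameterization (d much larger than n). In this toy setting, interpolating label noise is far more harmful when the parameter count barely exceeds the sample count.

```python
import numpy as np

def min_norm_excess_risk(n, d, sigma, rng, trials=5):
    """Average excess risk of the minimum-l2-norm interpolator.

    With isotropic features, the excess test risk of a linear predictor
    w_hat equals ||w_hat - w_star||^2. Labels carry Gaussian noise of
    standard deviation sigma, so the interpolator fits noise exactly.
    """
    risks = []
    for _ in range(trials):
        w_star = rng.standard_normal(d)
        w_star /= np.linalg.norm(w_star)          # unit-norm true signal
        X = rng.standard_normal((n, d))
        y = X @ w_star + sigma * rng.standard_normal(n)  # noisy labels
        w_hat = np.linalg.pinv(X) @ y             # min-norm interpolator
        assert np.allclose(X @ w_hat, y)          # training data fit exactly
        risks.append(float(np.sum((w_hat - w_star) ** 2)))
    return float(np.mean(risks))

rng = np.random.default_rng(0)
# Mild overparameterization: d barely exceeds n -> fitting noise is costly.
risk_mild = min_norm_excess_risk(n=100, d=110, sigma=0.5, rng=rng)
# Heavy overparameterization: d >> n -> overfitting becomes far more benign.
risk_heavy = min_norm_excess_risk(n=100, d=4000, sigma=0.5, rng=rng)
print(risk_mild, risk_heavy)
```

Both regimes interpolate the noisy training labels perfectly; only the heavily overparameterized one does so without a large penalty in excess risk, mirroring the regime distinction the abstract draws.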
