

Spotlight Poster

Noisy Interpolation Learning with Shallow Univariate ReLU Networks

Nirmit Joshi · Gal Vardi · Nathan Srebro

Halle B #184

Abstract: Understanding how overparameterized neural networks generalize despite perfect interpolation of noisy training data is a fundamental question. Mallinar et al. (2022) noted that neural networks seem to often exhibit "tempered overfitting", wherein the population risk does not converge to the Bayes optimal error, but neither does it approach infinity, yielding non-trivial generalization. However, this has not been studied rigorously. We provide the first rigorous analysis of the overfitting behaviour of regression with minimum-norm (ℓ2 of weights) interpolation, focusing on univariate two-layer ReLU networks. We show overfitting is tempered (with high probability) when measured with respect to the L1 loss, but also show that the situation is more complex than suggested by Mallinar et al., and overfitting is catastrophic with respect to the L2 loss, or when taking an expectation over the training set.
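The setting can be illustrated empirically. The sketch below (not the paper's construction or proof technique) fits a wide two-layer univariate ReLU network until it (nearly) interpolates noisy 1D labels, using a tiny weight decay as a rough proxy for preferring small ℓ2 weight norm, and then compares the L1 and L2 test risks against the (zero) Bayes-optimal predictor. All hyperparameters are illustrative assumptions.

```python
# Minimal illustration (assumptions throughout): overfit noisy 1D data with a
# wide two-layer ReLU net and compare L1 vs. L2 test risk.
import torch

torch.manual_seed(0)

# Noisy 1D regression data: target function f*(x) = 0, labels are pure noise.
n_train, n_test = 30, 10_000
x_train = torch.rand(n_train, 1)
y_train = 0.5 * torch.randn(n_train, 1)   # noisy labels
x_test = torch.rand(n_test, 1)
y_test = torch.zeros(n_test, 1)           # Bayes-optimal prediction is 0

# Wide (overparameterized) two-layer ReLU network.
width = 1000
model = torch.nn.Sequential(
    torch.nn.Linear(1, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

# Tiny weight decay: a crude proxy for small l2 weight norm, not the paper's
# exact min-norm interpolant.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)
for step in range(20_000):
    opt.zero_grad()
    loss = torch.mean((model(x_train) - y_train) ** 2)
    loss.backward()
    opt.step()
    if loss.item() < 1e-6:   # stop once the noisy labels are (nearly) interpolated
        break

with torch.no_grad():
    err = model(x_test) - y_test
    print(f"train MSE:    {loss.item():.2e}")
    print(f"test L1 risk: {err.abs().mean().item():.3f}")
    print(f"test L2 risk: {(err ** 2).mean().item():.3f}")
```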
