Poster

Overparameterisation and worst-case generalisation: friend or foe?

Aditya Krishna Menon · Ankit Singh Rawat · Sanjiv Kumar

Virtual

Keywords: [ overparameterisation ] [ worst-case generalisation ]


Abstract:

Overparameterised neural networks have demonstrated the remarkable ability to perfectly fit training samples while still generalising to unseen test samples. However, several recent works have revealed that such models' good average performance does not always translate to good worst-case performance: in particular, they may perform poorly on subgroups that are under-represented in the training set. In this paper, we show that in certain settings, overparameterised models' performance on under-represented subgroups may be improved via post-hoc processing. Specifically, such models' bias can be confined to their classification layers, where it manifests as structured prediction shifts for rare subgroups. We detail two post-hoc correction techniques to mitigate this bias, which operate purely on the outputs of standard model training. We empirically verify that with such post-hoc correction, overparameterisation can improve both average and worst-case performance.
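To illustrate the flavour of a post-hoc correction that operates purely on model outputs, here is a minimal sketch of logit adjustment by subgroup prior. The specific adjustment (subtracting a scaled log-prior from each class logit) and the function name `posthoc_adjust` are illustrative assumptions, not the paper's exact techniques:

```python
import numpy as np

def posthoc_adjust(logits, priors, tau=1.0):
    """Hypothetical post-hoc correction: shift each class logit down by
    tau * log(prior), so classes corresponding to rare subgroups are no
    longer systematically penalised at prediction time. Operates purely
    on the outputs of a standard trained model."""
    return logits - tau * np.log(priors)

# Toy example: two classes, one heavily under-represented (10% prior).
priors = np.array([0.9, 0.1])
logits = np.array([[0.0, 0.0],      # tie before correction
                   [1.0, 0.5]])     # head class ahead before correction
preds_raw = logits.argmax(axis=1)
preds_adj = posthoc_adjust(logits, priors).argmax(axis=1)
print(preds_raw, preds_adj)
```

After correction, the tied example flips to the rare class, since its log-prior penalty is removed; the strength of the shift is controlled by `tau`.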
