

In-Person Poster presentation / poster accept

Beyond calibration: estimating the grouping loss of modern neural networks

Alexandre Perez-Lebel · Marine Le Morvan · Gael Varoquaux

MH1-2-3-4 #64

Keywords: [ model evaluation ] [ decision making ] [ calibration ] [ grouping loss ] [ General Machine Learning ]


Abstract:

Ensuring that a classifier gives reliable confidence scores is essential for informed decision-making. To this end, recent work has focused on miscalibration, i.e., the over- or under-confidence of model scores. Yet calibration is not enough: even a perfectly calibrated classifier with the best possible accuracy can have confidence scores that are far from the true posterior probabilities. This is due to the grouping loss, created by samples with the same confidence score but different true posterior probabilities. Proper scoring rule theory shows that, given the calibration loss, the missing piece needed to characterize individual errors is the grouping loss. While many estimators of the calibration loss exist, none exists for the grouping loss in standard settings. Here, we propose an estimator to approximate the grouping loss. We show that modern neural network architectures in vision and NLP exhibit grouping loss, notably under distribution shift, which highlights the importance of pre-production validation.
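For intuition, the abstract's claim rests on the classic Brier-score decomposition from proper scoring rule theory: with score S, label Y, calibrated score C = E[Y | S], and true posterior Q = E[Y | X], the expected Brier score splits into calibration loss E[(S - C)^2], grouping loss E[(C - Q)^2], and irreducible loss E[(Q - Y)^2]. The sketch below is not the paper's estimator; it is a minimal synthetic illustration in which the true posterior is known by construction, so each term can be computed directly. The subgroup posteriors (0.3 and 0.7) and the constant score (0.5) are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two equally likely subgroups with true posteriors
# q = 0.3 and q = 0.7. A classifier outputting s = 0.5 for every sample
# is perfectly calibrated (E[Y | S=0.5] = 0.5), yet its scores are far
# from the true posteriors: that gap is the grouping loss.
n = 100_000
q = rng.choice([0.3, 0.7], size=n)        # true posterior Q = E[Y | X]
y = rng.random(n) < q                     # labels drawn from Q
s = np.full(n, 0.5)                       # constant, calibrated score S

c = y.mean()                              # C = E[Y | S]: one score level here
calibration_loss = np.mean((s - c) ** 2)  # E[(S - C)^2], ~0
grouping_loss = np.mean((c - q) ** 2)     # E[(C - Q)^2], ~0.04
irreducible = np.mean((q - y) ** 2)       # E[(Q - Y)^2], ~0.21
brier = np.mean((s - y) ** 2)             # total Brier score, ~0.25

print(f"calibration loss ≈ {calibration_loss:.4f}")
print(f"grouping loss    ≈ {grouping_loss:.4f}")
print(f"Brier ≈ {brier:.4f} ≈ {calibration_loss + grouping_loss + irreducible:.4f}")
```

With these hypothetical numbers the score is calibrated (calibration loss ≈ 0) while the grouping loss is (0.5 - 0.3)^2 = 0.04, illustrating the abstract's point: calibration alone cannot certify that confidence scores match the true posterior probabilities.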
