Poster in Workshop: Pitfalls of limited data and computation for Trustworthy ML

Beyond Confidence: Reliable Models Should Also Quantify Atypicality

Mert Yuksekgonul · Linjun Zhang · James Y Zou · Carlos Guestrin


Abstract:

While most machine learning models can provide confidence in their predictions, confidence alone is insufficient for reliably understanding and using a model's uncertainty. For instance, a model may produce a low-confidence prediction either because the sample is far from the training distribution or because it is inherently ambiguous, and the confidence score alone cannot distinguish between these cases. In this work, we investigate the relationship between how atypical (or rare) a sample is and the reliability of a model's confidence for that sample. First, we show that atypicality can predict miscalibration: we empirically demonstrate that predictions for atypical examples are more miscalibrated and overconfident, and we support these findings with theoretical insights. Using these insights, we show how being atypicality-aware improves uncertainty quantification. Finally, we present a framework for improving decision-making and show that accounting for atypicality improves the selective reporting of uncertainty sets. Given these insights, we propose that models should be equipped not only with confidence but also with an atypicality estimator for reliable uncertainty quantification. Our results demonstrate that simple post-hoc atypicality estimators can provide significant value.
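
The closing claim, that simple post-hoc atypicality estimators provide value, can be made concrete with a small sketch. The snippet below is illustrative only: it assumes a k-nearest-neighbor distance in the model's feature space as the atypicality score and per-quantile-group temperature scaling as the atypicality-aware recalibration. Both are hypothetical stand-ins for the paper's actual procedure, and all function and parameter names here are invented for the example.

```python
# Minimal sketch of a post-hoc atypicality estimator plus atypicality-aware
# recalibration. The k-NN distance score and per-group temperature scaling
# are illustrative choices, not necessarily the paper's exact method.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax
from sklearn.neighbors import NearestNeighbors


def knn_atypicality(train_feats: np.ndarray, feats: np.ndarray, k: int = 10) -> np.ndarray:
    """Score atypicality as the distance to the k-th nearest training sample."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(feats)
    return dists[:, -1]  # larger distance => more atypical (rarer) sample


def fit_temperature(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fit a scalar temperature by minimizing negative log-likelihood on held-out data."""
    def nll(t: float) -> float:
        logp = log_softmax(logits / t, axis=1)
        return -logp[np.arange(len(labels)), labels].mean()
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
    return float(res.x)


def atypicality_aware_temperatures(logits, labels, atypicality, n_groups=4):
    """Fit one temperature per atypicality-quantile group (hypothetical scheme)."""
    edges = np.quantile(atypicality, np.linspace(0.0, 1.0, n_groups + 1))
    groups = np.clip(np.searchsorted(edges, atypicality, side="right") - 1,
                     0, n_groups - 1)
    temps = np.array([
        fit_temperature(logits[groups == g], labels[groups == g])
        for g in range(n_groups)
    ])
    return edges, temps  # reuse edges to assign groups at test time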
