Poster

Provable Uncertainty Decomposition via Higher-Order Calibration

Gustaf Ahdritz · Aravind Gollakota · Parikshit Gopalan · Charlotte Peale · Udi Wieder

Hall 3 + Hall 2B #501
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract: We give a principled method for decomposing the predictive uncertainty of a model into aleatoric and epistemic components with explicit semantics relating them to the real-world data distribution. While many works in the literature have proposed such decompositions, they lack the type of formal guarantees we provide. Our method is based on the new notion of higher-order calibration, which generalizes ordinary calibration to the setting of higher-order predictors that predict _mixtures_ over label distributions at every point. We show how to measure as well as achieve higher-order calibration using access to _k_-snapshots, namely examples where each point has _k_ independent conditional labels. Under higher-order calibration, the estimated aleatoric uncertainty at a point is guaranteed to match the real-world aleatoric uncertainty averaged over all points where the prediction is made. To our knowledge, this is the first formal guarantee of this type that places no assumptions whatsoever on the real-world data distribution. Importantly, higher-order calibration is also applicable to existing higher-order predictors such as Bayesian and ensemble models and provides a natural evaluation metric for such models. We demonstrate through experiments that our method produces meaningful uncertainty decompositions in tasks such as image classification.
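To make the object of study concrete: a higher-order prediction at a point is a mixture of label distributions (e.g. the set of softmax outputs from an ensemble's members). The sketch below shows the standard entropy-based decomposition of such a mixture into aleatoric and epistemic parts; it is an illustration of the kind of decomposition the abstract discusses, not the paper's specific calibration procedure, and the function name `decompose_uncertainty` is a hypothetical label for this note.

```python
import numpy as np

def decompose_uncertainty(mixture):
    """Entropy-based decomposition of a higher-order (mixture) prediction.

    mixture: array of shape (k, num_classes); each row is one component
    label distribution, e.g. one ensemble member's softmax output at a
    single input point.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    eps = 1e-12  # guard against log(0)
    mixture = np.asarray(mixture, dtype=float)

    # Total uncertainty: entropy of the collapsed (averaged) prediction.
    mean_p = mixture.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))

    # Aleatoric part: average entropy of the individual components.
    aleatoric = -np.sum(mixture * np.log(mixture + eps), axis=1).mean()

    # Epistemic part: the gap (mutual information), nonnegative by Jensen.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Two confident but disagreeing ensemble members: the decomposition
# attributes most of the uncertainty to the epistemic component.
total, alea, epi = decompose_uncertainty([[0.99, 0.01], [0.01, 0.99]])
```

In this toy example the averaged prediction is uniform (total uncertainty near ln 2), yet each component is nearly deterministic, so the aleatoric term is small and the disagreement shows up as epistemic uncertainty. The paper's contribution, as the abstract states, is giving such decompositions formal semantics via higher-order calibration, verified and achieved using _k_-snapshots.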
