

Poster

QA-Calibration of Language Model Confidence Scores

Putra Manggala · Atalanti A Mastakouri · Elke Kirschbaum · Shiva Kasiviswanathan · Aaditya Ramdas

Hall 3 + Hall 2B #576
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

To use generative question-and-answering (QA) systems for decision-making and in other critical applications, these systems need to provide well-calibrated confidence scores that reflect the correctness of their answers. Existing calibration methods aim to ensure that the confidence score is, on average, indicative of the likelihood that the answer is correct. We argue, however, that this standard (average-case) notion of calibration is difficult to interpret for decision-making in generative QA. To address this, we generalize the standard notion of average calibration and introduce QA-calibration, which ensures calibration holds across different question-and-answer groups. We then propose discretized post-hoc calibration schemes for achieving QA-calibration. We establish distribution-free guarantees on the performance of these schemes and validate them on confidence scores returned by elicitation prompts across multiple QA benchmarks and large language models (LLMs).
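The abstract contrasts average-case calibration with calibration that holds within question-and-answer groups. The sketch below is a rough illustration of that distinction only, not the paper's construction: it fits a standard histogram-binning calibrator on held-out confidence scores, plus a group-conditional variant that bins separately within each group. The group labels, bin count, and function names are assumptions made for the example.

# Illustrative sketch: group-conditional histogram binning as one way to
# realize calibration within question-and-answer groups. Names and the
# grouping scheme are hypothetical, not the paper's method.
import numpy as np

def fit_binned_calibrator(confidences, correct, n_bins=10):
    # Average-case (marginal) histogram binning: map each raw confidence
    # to the empirical accuracy of its bin on held-out calibration data.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(confidences, edges[1:-1]), 0, n_bins - 1)
    bin_acc = np.full(n_bins, correct.mean())  # fall back to global accuracy
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            bin_acc[b] = correct[mask].mean()
    def calibrate(c):
        return bin_acc[np.clip(np.digitize(c, edges[1:-1]), 0, n_bins - 1)]
    return calibrate

def fit_groupwise_calibrator(confidences, correct, groups, n_bins=10):
    # Group-conditional variant: fit a separate binned calibrator per
    # question-and-answer group, so calibration holds within each group
    # rather than only on average over the whole dataset.
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[g] = fit_binned_calibrator(confidences[mask], correct[mask], n_bins)
    def calibrate(c, g):
        return per_group[g](c)
    return calibrate

# Toy usage: confidences elicited from an LLM on a calibration set, binary
# correctness labels, and a coarse (hypothetical) group label per question.
rng = np.random.default_rng(0)
conf = rng.uniform(size=1000)
grp = rng.integers(0, 3, size=1000)           # hypothetical QA groups
acc_by_group = np.array([0.9, 0.6, 0.3])      # groups differ in difficulty
corr = (rng.uniform(size=1000) < conf * acc_by_group[grp]).astype(float)

marginal = fit_binned_calibrator(conf, corr)
groupwise = fit_groupwise_calibrator(conf, corr, grp)
print(marginal(0.8), groupwise(0.8, 2))  # group-aware score reflects group 2's difficulty

In this toy setup the marginal calibrator returns one score for confidence 0.8 regardless of which group the question falls in, while the group-conditional calibrator reports a lower corrected score for the harder group, which is the kind of group-wise guarantee the abstract motivates.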
