Oral in Workshop: Secure and Trustworthy Large Language Models
Calibrating Language Models With Adaptive Temperature Scaling
Johnathan Xie · Annie Chen · Yoonho Lee · Eric Mitchell · Chelsea Finn
The effectiveness of large language models (LLMs) is not only measured by their ability to generate accurate outputs but also by their calibration: how well their confidence scores reflect the probability of their outputs being correct. While unsupervised pre-training has been shown to yield LLMs with well-calibrated conditional probabilities, recent studies have shown that after fine-tuning with reinforcement learning from human feedback (RLHF), the calibration of these models degrades significantly. In this work, we introduce Adaptive Temperature Scaling (ATS), a post-hoc calibration method that predicts a temperature scaling parameter for each token prediction. The predicted temperature values adapt based on token-level features and are fit over a standard supervised fine-tuning (SFT) dataset. The adaptive nature of ATS addresses the varying degrees of calibration shift that can occur after RLHF fine-tuning. ATS improves calibration by 10-50% across three downstream natural language evaluation benchmarks compared to prior calibration methods and does not impede performance improvements from RLHF.
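
To make the idea concrete, below is a minimal PyTorch sketch of per-token adaptive temperature scaling, not the authors' released implementation. It assumes access to the frozen LM's logits and hidden states; the head architecture and all names (TemperatureHead, calibration_loss, hidden_dim) are illustrative assumptions.

```python
# Hypothetical sketch of per-token adaptive temperature scaling.
# Assumes a frozen base LM that exposes token logits (batch, seq, vocab)
# and hidden states (batch, seq, hidden_dim); only the small head is trained.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemperatureHead(nn.Module):
    """Predicts a positive temperature for each token from its hidden state."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # softplus keeps the predicted temperature strictly positive;
        # the small constant avoids division by a near-zero temperature.
        return F.softplus(self.proj(hidden_states)) + 1e-3  # (batch, seq, 1)


def calibrated_log_probs(logits, hidden_states, temp_head):
    """Rescale each token's logits by its predicted temperature, post hoc."""
    tau = temp_head(hidden_states)              # (batch, seq, 1), broadcasts
    return F.log_softmax(logits / tau, dim=-1)  # (batch, seq, vocab)


def calibration_loss(logits, hidden_states, labels, temp_head):
    """Fit the head on an SFT-style dataset with a cross-entropy objective
    on the temperature-rescaled logits (an assumed training setup)."""
    log_probs = calibrated_log_probs(logits, hidden_states, temp_head)
    return F.nll_loss(
        log_probs.flatten(0, 1),  # (batch * seq, vocab)
        labels.flatten(),         # (batch * seq,)
        ignore_index=-100,        # skip padding / prompt tokens
    )
```

Because the base model's weights are untouched and only the logits are rescaled at inference time, a head like this adjusts confidence per token without altering which token is most likely, which is consistent with the abstract's claim that calibration improves without impeding RLHF performance gains.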