

Poster in Workshop: Secure and Trustworthy Large Language Models

MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs

Yavuz Faruk Bakman · Duygu Nur Yaldiz · Baturalp Buyukates · Chenyang Tao · Dimitrios Dimitriadis · Salman Avestimehr


Abstract:

Generative Large Language Models (LLMs) are widely utilized for their excellence in various tasks. Estimating the correctness of generative LLM outputs is therefore important for reliability. Uncertainty Estimation (UE) in generative LLMs is an evolving domain, where state-of-the-art probability-based methods commonly employ length-normalized scoring. In this work, we propose Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring for UE methods. MARS is a novel scoring function that considers the semantic contribution of each token in the generated sequence in the context of the question. We demonstrate that integrating MARS into UE methods yields a universal and significant improvement in UE performance. Code is available at https://github.com/Ybakman/LLM_Uncertainty.
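To illustrate the contrast the abstract draws, the sketch below compares standard length-normalized scoring (mean token log-probability) with a meaning-weighted variant that aggregates token log-probabilities using per-token importance weights. This is a minimal illustration, not the authors' implementation: the importance weights here are hypothetical placeholders, whereas MARS derives them from each token's semantic contribution in the context of the question (see the linked repository for the actual method).

```python
import numpy as np

def length_normalized_score(token_logprobs):
    """Length-normalized scoring: exponentiated mean token log-probability."""
    return float(np.exp(np.mean(token_logprobs)))

def meaning_weighted_score(token_logprobs, importance_weights):
    """Meaning-weighted scoring sketch: aggregate token log-probabilities
    with normalized per-token importance weights. The weights are
    placeholders standing in for MARS's semantic-contribution scores."""
    w = np.asarray(importance_weights, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return float(np.exp(np.sum(w * np.asarray(token_logprobs, dtype=float))))

# Toy example: the answer's content-bearing token (third) is uncertain,
# while filler tokens are confidently predicted. (Hypothetical values.)
logprobs = [-0.05, -0.10, -2.30, -0.02]
weights  = [0.05, 0.05, 0.85, 0.05]

print(length_normalized_score(logprobs))          # treats all tokens equally
print(meaning_weighted_score(logprobs, weights))  # emphasizes the meaningful token
```

In this toy case, length normalization dilutes the uncertainty of the single meaning-bearing token across confident filler tokens, while the weighted score surfaces it, which is the intuition behind replacing length-normalized scoring in UE methods.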
