
Poster

Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge

Jiayi Ye · Yanbo Wang · Yue Huang · Dongping Chen · Qihui Zhang · Nuno Moniz · Tian Gao · Werner Geyer · Chao Huang · Pin-Yu Chen · Nitesh Chawla · Xiangliang Zhang

Hall 3 + Hall 2B #561
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

LLM-as-a-Judge has been widely adopted as an evaluation method in various benchmarks and used as a source of supervised reward signals in model training. However, despite its strong performance in many domains, potential reliability issues remain under-explored, limiting the scope of its utility. We therefore identify 12 key potential biases and propose a new automated bias quantification framework, CALM, which systematically quantifies and analyzes each type of bias in LLM-as-a-Judge through automated, principle-guided modification. Our experiments cover multiple popular language models, and the results indicate that while advanced models achieve commendable overall performance, significant biases persist in certain specific tasks. These empirical results suggest that there remains room for improvement in the reliability of LLM-as-a-Judge. Moreover, we discuss the explicit and implicit influence of these biases and offer suggestions for the reliable application of LLM-as-a-Judge. Our work highlights the need for stakeholders to address these issues and reminds users to exercise caution in LLM-as-a-Judge applications.
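To make the perturb-and-compare idea concrete, the sketch below shows one simple way to quantify a single bias (verbosity) by re-running a judge on a modified answer and measuring how often its verdict stays the same. This is a minimal illustration under stated assumptions, not the CALM framework's actual implementation: judge_prefers, inject_verbosity_bias, and robustness_rate are hypothetical names, and the stub judge must be replaced with a real LLM call.

# Minimal sketch of perturbation-based bias quantification for an LLM judge.
# All names here (judge_prefers, inject_verbosity_bias, robustness_rate) are
# hypothetical placeholders, not the CALM framework's actual API.

import random

def judge_prefers(question: str, answer_a: str, answer_b: str) -> str:
    """Stub judge: replace with a real LLM call that returns 'A' or 'B'."""
    return random.choice(["A", "B"])

def inject_verbosity_bias(answer: str) -> str:
    """Hypothetical principle-guided modification: pad the answer with
    content-free filler to probe verbosity bias."""
    filler = " To elaborate further, this point holds in general settings."
    return answer + filler * 3

def robustness_rate(pairs) -> float:
    """Fraction of judgments unchanged after the biased modification.
    Lower values indicate stronger susceptibility to the injected bias."""
    consistent = 0
    for question, ans_a, ans_b in pairs:
        original = judge_prefers(question, ans_a, ans_b)
        perturbed = judge_prefers(question, inject_verbosity_bias(ans_a), ans_b)
        consistent += (original == perturbed)
    return consistent / len(pairs)

if __name__ == "__main__":
    demo_pairs = [
        ("What causes tides?", "The Moon's gravity.", "Ocean currents alone."),
    ]
    print(f"Robustness rate: {robustness_rate(demo_pairs):.2f}")

In practice, each bias type would use its own principle-guided modification (e.g., reordering answers for position bias, or adding authoritative-sounding citations for authority bias), and the resulting robustness rates can be compared across models and tasks.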
