

Poster in Workshop: Workshop on Large Language Models for Agents

R-Judge: Benchmarking Safety Risk Awareness for LLM Agents

Tongxin Yuan · Zhiwei He · Lingzhong Dong · Yiming Wang · Ruijie Zhao · Tian Xia · Lizhen Xu · Binglin Zhou · Fangqi Li · Zhuosheng Zhang · Rui Wang · Gongshen Liu


Abstract:

Large language models (LLMs) have exhibited great potential in autonomously completing tasks across real-world applications. Despite this, LLM agents introduce unexpected safety risks when operating in interactive environments. Unlike most prior studies, which center on the safety of LLM-generated content, this work addresses the imperative need for benchmarking the behavioral safety of LLM agents within diverse environments. We introduce R-Judge, a benchmark crafted to evaluate the proficiency of LLMs in judging and identifying safety risks given agent interaction records. R-Judge comprises 162 records of multi-turn agent interaction, encompassing 27 key risk scenarios across 7 application categories and 10 risk types. It incorporates human consensus on safety with annotated safety labels and high-quality risk descriptions. Utilizing R-Judge, we conduct a comprehensive evaluation of 8 prominent LLMs commonly employed as the backbone for agents. The best-performing model, GPT-4, achieves 72.52%, in contrast to the human score of 89.07%, while all other models score below the random baseline, showing considerable room for enhancing the risk awareness of LLMs. Moreover, further experiments demonstrate that straightforward prompting mechanisms fail to improve model performance. Case studies reveal that, while correlated with parameter count, risk awareness in open agent scenarios is a multi-dimensional capability involving both knowledge and reasoning, and thus remains challenging for current LLMs. We anticipate that R-Judge will facilitate the safe development of LLM agents. R-Judge is publicly available at Anonymous.
