

AgentBench: Evaluating LLMs as Agents

Xiao Liu · Hao Yu · Hanchen Zhang · Yifan Xu · Xuanyu Lei · Hanyu Lai · Yu Gu · Hangliang Ding · Kaiwen Men · Kejuan Yang · Shudan Zhang · Xiang Deng · Aohan Zeng · Zhengxiao Du · Chenhui Zhang · Sheng Shen · Tianjun Zhang · Yu Su · Huan Sun · Minlie Huang · Yuxiao Dong · Jie Tang

Halle B #119
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT


The potential of Large Language Models (LLMs) as agents has been widely acknowledged recently. Thus, there is an urgent need to quantitatively evaluate LLMs as agents on challenging tasks in interactive environments. We present AgentBench, a multi-dimensional benchmark that consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities. Our extensive test over 29 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability to act as agents in complex environments, there is a significant performance disparity between them and many OSS competitors that are no larger than 70B. We identify the typical causes of failure across environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction-following abilities are the main obstacles to developing usable LLM agents. Improving instruction following and training on high-quality multi-round alignment data could improve agent performance. In contrast to existing assumptions, training on code presents ambivalent impacts across different agent tasks. Datasets, environments, and an integrated evaluation package for AgentBench are released at
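To make the evaluation setting concrete, a minimal sketch of the multi-round LLM-as-Agent interaction pattern the abstract describes: the model observes the environment, emits an action, and the environment returns the next observation and a reward. All names here (`Environment`, `llm_act`, `run_episode`) are illustrative assumptions, not AgentBench's actual API.

```python
class Environment:
    """Toy multi-round environment: the agent must count a state down to zero."""
    def __init__(self, start=3):
        self.state = start

    def observe(self):
        # Textual observation, as an LLM agent would receive it.
        return f"counter={self.state}"

    def step(self, action):
        if action == "decrement":
            self.state -= 1
        done = self.state == 0
        reward = 1.0 if done else 0.0
        return self.observe(), reward, done


def llm_act(observation):
    """Stand-in for a model call that maps an observation to an action."""
    return "decrement"


def run_episode(env, max_rounds=10):
    """Interaction loop: observe -> act -> step, until done or round limit."""
    total = 0.0
    for _ in range(max_rounds):
        action = llm_act(env.observe())
        _, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total


print(run_episode(Environment()))  # 1.0
```

In the real benchmark, `llm_act` would be a call to an API-based or open-sourced model, and the reward would reflect task success in one of the 8 environments.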
