

Poster in Workshop: Workshop on Large Language Models for Agents

MAGIC: INVESTIGATION OF LARGE LANGUAGE MODEL POWERED MULTI-AGENT IN COGNITION, ADAPTABILITY, RATIONALITY AND COLLABORATION

Lin Xu · Zhiyuan Hu · Zhou Daquan · Hongyu Ren · Zhen Dong · Kurt Keutzer · See-Kiong Ng · Jiashi Feng


Abstract:

Large Language Models (LLMs) have marked a significant advancement in the field of natural language processing, demonstrating exceptional capabilities in reasoning, tool usage, and memory. As their applications extend into multi-agent environments, a need has arisen for a comprehensive evaluation framework that captures their reasoning, planning, collaboration, and other abilities. This work introduces a novel benchmarking framework specifically tailored to assess LLMs within multi-agent settings, providing quantitative metrics to evaluate their judgment, reasoning, deception, self-awareness, cooperation, coordination, and rationality. We utilize social deduction games, Chameleon and Undercover, alongside game theory scenarios such as Cost Sharing, Multi-player Prisoner's Dilemma, and Public Good, to create diverse environments. Our framework is fortified with the probabilistic graphical modeling (PGM) method, enhancing the LLMs' capabilities in navigating complex social and cognitive dimensions. The benchmark evaluates 7 multi-agent systems powered by different LLMs, quantitatively highlighting a significant capability gap, more than threefold, between the strongest, GPT-4, and the weakest, Llama-2-70B. It also confirms that our PGM enhancement boosts the inherent abilities of all selected models by 37% on average. Our code can be found at the anonymous link: https://anonymous.4open.science/r/magic_anonym-5366
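
As a rough illustration of the kind of game-theory environment named in the abstract, the sketch below simulates one round of a standard Public Goods game among several agents. It is a minimal example of the textbook game, not the benchmark's actual interface; the function name, endowment, and multiplier values are illustrative assumptions.

# Minimal sketch of one Public Goods round (textbook formulation).
# Endowment and multiplier values are illustrative assumptions,
# not the benchmark's actual configuration.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Return each player's payoff after one Public Goods round.

    Each player keeps (endowment - contribution) and receives an equal
    share of the pooled contributions scaled by `multiplier`.
    """
    n = len(contributions)
    pool = multiplier * sum(contributions)
    share = pool / n
    return [endowment - c + share for c in contributions]


if __name__ == "__main__":
    # Three agents: two contribute fully, one free-rides.
    print(public_goods_payoffs([10.0, 10.0, 0.0]))
    # The free-rider earns the highest individual payoff even though
    # full cooperation maximizes the group total (since 1 < multiplier < n),
    # which is the cooperation-versus-self-interest tension such scenarios probe.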
