

Poster

Scaling Large Language Model-based Multi-Agent Collaboration

Chen Qian · Zihao Xie · YiFei Wang · Wei Liu · Kunlun Zhu · Hanchen Xia · Yufan Dang · Zhuoyun Du · Weize Chen · Cheng Yang · Zhiyuan Liu · Maosong Sun

Hall 3 + Hall 2B #236
[ Project Page ]
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Recent breakthroughs in large language model-driven autonomous agents have revealed that multi-agent collaboration often surpasses each individual agent through collective reasoning. Inspired by the neural scaling law, in which adding neurons enhances performance, this study explores whether continuously adding collaborative agents can yield similar benefits. Technically, we use directed acyclic graphs to organize agents into a multi-agent collaboration network (MacNet), over which their interactive reasoning is topologically orchestrated for autonomous task solving. Extensive evaluations show that MacNet effectively supports collaboration among more than a thousand agents, with irregular topologies outperforming regular ones. We also identify a collaborative scaling law: overall performance follows a logistic growth pattern as agents scale, and collaborative emergence occurs earlier than traditional neural emergence. We speculate this may be because scaling the number of agents catalyzes multidimensional consideration during interactive reflection and refinement, thereby producing more comprehensive artifacts. The code is available at https://github.com/OpenBMB/ChatDev/tree/macnet.
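
Because the abstract describes organizing agents as nodes of a directed acyclic graph and orchestrating their interactive reasoning in topological order, the following minimal Python sketch illustrates that idea. It is not the MacNet implementation from the linked repository: query_llm, run_macnet_like, and the example topology are hypothetical placeholders, and only the DAG-ordered message passing reflects the description above.

    # Minimal sketch (assumed interfaces, not the authors' code) of DAG-orchestrated
    # multi-agent collaboration: agents are nodes, edges pass artifacts, and
    # reasoning proceeds in a topological order of the graph.
    from graphlib import TopologicalSorter

    def query_llm(prompt: str) -> str:
        """Hypothetical LLM call; replace with a real API client."""
        return f"[artifact refined from: {prompt[:60]}...]"

    def run_macnet_like(dag: dict[str, set[str]], task: str) -> str:
        """Propagate a task through agents in topological order.

        dag maps each agent to the set of upstream agents whose outputs it consumes.
        """
        outputs: dict[str, str] = {}
        last_agent = None
        for agent in TopologicalSorter(dag).static_order():
            upstream = [outputs[p] for p in dag.get(agent, set())]
            context = "\n".join(upstream) if upstream else task
            outputs[agent] = query_llm(f"Agent {agent}, refine:\n{context}")
            last_agent = agent
        # The final node in topological order holds the aggregated artifact.
        return outputs[last_agent]

    # Example: a small diamond-shaped topology; scaling the study's idea up means
    # growing this dictionary to hundreds or thousands of agents.
    dag = {"reviewer": {"coder_a", "coder_b"},
           "coder_a": {"planner"},
           "coder_b": {"planner"},
           "planner": set()}
    print(run_macnet_like(dag, "Implement a CLI to-do list."))

In this sketch each agent consumes the artifacts of its upstream neighbors and emits a refined artifact; varying the shape of the dag dictionary corresponds to the regular versus irregular topologies compared in the paper.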
