

Oral
in
Workshop on Large Language Models for Agents

Exploring Collaboration Mechanisms for LLM Agents: A Social Psychology View

Jintian Zhang · Xin Xu · Ningyu Zhang · Ruibo Liu · Bryan Hooi · Shumin Deng


Abstract:

As Natural Language Processing (NLP) systems are increasingly deployed in intricate social environments, a pressing question emerges: can these systems mirror human-like collaborative intelligence in a multi-agent society consisting of multiple large language models (LLMs)? This paper probes the collaboration mechanisms among contemporary NLP systems by combining practical experiments with theoretical insights. We construct four unique 'societies' composed of LLM agents, where each agent is characterized by a specific 'trait' (easy-going or overconfident) and collaborates through a distinct 'thinking pattern' (debate or reflection). Evaluating these multi-agent societies on three benchmark datasets, we find that certain collaborative strategies not only outperform previous state-of-the-art approaches but are also more efficient (using fewer API tokens). Moreover, our results illustrate that LLM agents exhibit human-like social behaviors, such as conformity and consensus reaching, mirroring social psychology theories and demonstrating the potential of human-AI interaction. In conclusion, we integrate insights from social psychology to contextualize the collaboration of LLM agents, inspiring further investigations into collaboration mechanisms for LLMs. We commit to sharing our code and datasets (already shared via an anonymous link), hoping to catalyze further research in this promising direction.
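The setup described above (agents with a 'trait' collaborating over rounds via 'debate' or 'reflection') can be sketched as a toy simulation. This is a minimal illustrative sketch, not the authors' implementation: the `Agent` class, the `run_society` loop, and the rule that easy-going agents conform to the majority while overconfident agents keep their answer are all assumptions standing in for real LLM prompts.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    trait: str      # "easy-going" or "overconfident" (paper's two traits)
    answer: str = ""

    def respond(self, question: str, peer_answers: list) -> str:
        # Stub for an LLM call: a real agent would be prompted with a
        # trait-specific persona. Here, easy-going agents conform to the
        # majority peer answer (a toy model of conformity), while
        # overconfident agents stick with their own answer.
        if self.trait == "easy-going" and peer_answers:
            return max(set(peer_answers), key=peer_answers.count)
        return self.answer or f"{self.name}'s initial answer"

def run_society(agents, question, rounds, pattern):
    """pattern: 'debate' shows each agent its peers' answers;
    'reflection' shows an agent only its own previous answer."""
    for _ in range(rounds):
        snapshot = {a.name: a.answer for a in agents}
        for a in agents:
            if pattern == "debate":
                peers = [v for k, v in snapshot.items() if k != a.name and v]
            else:  # reflection
                peers = [a.answer] if a.answer else []
            a.answer = a.respond(question, peers)
    return [a.answer for a in agents]
```

For example, one overconfident agent seeded with an answer and two easy-going agents converge to consensus within a couple of debate rounds, mimicking the conformity behavior the abstract reports.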
