Poster
in
Workshop: Workshop on Large Language Models for Agents

The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-based Agents

Yun-Shiuan Chuang · Nikunj Harlalka · SIDDHARTH SURESH · Agam Goyal · Robert Hawkins · Sijia Yang · Dhavan Shah · Junjie Hu · Timothy Rogers


Abstract:

Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias — a phenomenon known as the "wisdom of partisan crowds." Large Language Model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents that are prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that these agents not only display human-like partisan biases, but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show both the potential and the limitations of LLM-based agents as a model of human collective intelligence.