Poster in Workshop: Workshop on Large Language Models for Agents

Simulating Opinion Dynamics with Networks of LLM-based Agents

Yun-Shiuan Chuang · Agam Goyal · Nikunj Harlalka · Siddharth Suresh · Robert Hawkins · Sijia Yang · Dhavan Shah · Junjie Hu · Timothy Rogers


Abstract:

Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often oversimplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Model (LLM) agents. Our findings reveal a strong inherent bias in LLM agents toward producing accurate information, which drives simulated agents to consensus consistent with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change. After inducing confirmation bias through prompt engineering, however, we observe opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight both the promise and the limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.
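To make the setup concrete, below is a minimal sketch, not the authors' released code, of how such a simulation loop might look: each agent repeatedly reads messages sampled from its neighbors and restates its opinion via an LLM, and a prompt variant induces confirmation bias. The `query_llm` stub, the persona wordings, the fully mixed network sampling, and all parameters are illustrative assumptions rather than details taken from the paper.

```python
# A minimal sketch of an LLM-agent opinion dynamics loop.
# `query_llm` is a hypothetical stand-in for any chat-completion API;
# swap in a real client to run actual simulations.
import random

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    # Stub: returns a canned opinion so the sketch runs end to end.
    return "I believe climate change is real and human-caused."

NEUTRAL_PERSONA = (
    "You are a member of an online community discussing climate change. "
    "After reading your neighbors' messages, state your current opinion."
)

# Prompt variant meant to induce confirmation bias, loosely mirroring the
# paper's prompt-engineering manipulation (the wording here is illustrative).
BIASED_PERSONA = NEUTRAL_PERSONA + (
    " Give much more weight to messages that agree with your prior view, "
    "and largely discount messages that contradict it."
)

def simulate(num_agents=10, num_rounds=5, neighbors_per_round=3,
             biased=False, seed=0):
    rng = random.Random(seed)
    persona = BIASED_PERSONA if biased else NEUTRAL_PERSONA
    # Each agent starts from a short self-generated opinion statement.
    opinions = [query_llm(persona + " State your initial opinion.")
                for _ in range(num_agents)]
    for _ in range(num_rounds):
        for i in range(num_agents):
            # Sample a few interaction partners (fully mixed network here;
            # the paper's network structure may differ).
            others = [j for j in range(num_agents) if j != i]
            neighbors = rng.sample(others, k=neighbors_per_round)
            messages = "\n".join(f"- {opinions[j]}" for j in neighbors)
            prompt = (f"{persona}\nYour last opinion: {opinions[i]}\n"
                      f"Your neighbors said:\n{messages}\n"
                      f"Your updated opinion:")
            opinions[i] = query_llm(prompt)
    return opinions

if __name__ == "__main__":
    print(simulate(biased=True))
```

Running the loop with `biased=True` versus `biased=False` would then let one compare fragmentation against consensus across the final opinion lists, mirroring the contrast the abstract describes.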
