Poster

Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View

Xuan Liu · Jie ZHANG · HaoYang Shang · Song Guo · Chengxu Yang · Quanyan Zhu

Hall 3 + Hall 2B #470
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Large language models (LLMs) have been shown to suffer from hallucination issues because the data they were trained on often contains human bias; whether this bias is reflected in the decision-making of LLM agents remains under-explored. As LLM agents are increasingly deployed in intricate social environments, a pressing and natural question emerges: can we use LLM agents' systematic hallucinations to mirror human cognitive biases, and thereby exhibit irrational social intelligence? In this paper, we probe the irrational behavior of contemporary LLM agents by combining practical social science experiments with theoretical insights. Specifically, we propose CogMir, an open-ended multi-LLM-agent framework that leverages hallucination properties to assess and enhance LLM agents' social intelligence through cognitive biases. Experimental results on CogMir subsets show that LLM agents and humans exhibit high consistency in irrational and prosocial decision-making under uncertain conditions, underscoring the prosociality of LLM agents as social entities and highlighting the significance of hallucination properties. Additionally, the CogMir framework demonstrates its potential as a valuable platform for encouraging further research into the social intelligence of LLM agents.
