Poster

Human Simulacra: Benchmarking the Personification of Large Language Models

Qiujie Xie · Qiming Feng · Tianqi Zhang · Qingqiu Li · Linyi Yang · Yuejie Zhang · Rui Feng · Liang He · Shang Gao · Yue Zhang

Hall 3 + Hall 2B #213
[ Project Page ]
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Large Language Models (LLMs) are recognized as systems that closely mimic aspects of human intelligence. This capability has attracted the attention of the social science community, which sees potential in leveraging LLMs to replace human participants in experiments, thereby reducing research costs and complexity. In this paper, we introduce a benchmark for LLM personification, comprising a strategy for constructing virtual characters' life stories from the ground up, a Multi-Agent Cognitive Mechanism capable of simulating human cognitive processes, and a psychology-guided evaluation method that assesses human simulations from both self and observational perspectives. Experimental results demonstrate that our constructed simulacra can produce personified responses aligned with their target characters. We hope this work will serve as a benchmark for the field of human simulation and pave the way for future research.
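To make the Multi-Agent Cognitive Mechanism concrete, below is a minimal sketch of one plausible reading of such a pipeline: specialized agents (memory, emotion, thinking) process a stimulus in sequence, and a final agent composes a persona-consistent reply. All names here (CognitiveAgent, llm_complete, the specific agent roles) are hypothetical illustrations chosen for this sketch, not the authors' implementation.

```python
# A hedged sketch of a multi-agent cognitive pipeline. Agent roles and
# prompts are illustrative assumptions, not the paper's actual design.
from dataclasses import dataclass


def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real client here."""
    return f"<response to: {prompt[:40]}...>"


@dataclass
class CognitiveAgent:
    role: str          # e.g., "memory", "emotion", "thinking"
    instruction: str   # role-specific instruction for this cognitive stage

    def process(self, persona: str, stimulus: str, context: str) -> str:
        # Each agent sees the persona, the stimulus, and all prior stages.
        prompt = (
            f"You are the {self.role} module of {persona}.\n"
            f"{self.instruction}\n"
            f"Context so far: {context}\n"
            f"Stimulus: {stimulus}"
        )
        return llm_complete(prompt)


def simulate_response(persona: str, life_story: str, stimulus: str) -> str:
    # Sequential cognitive stages, each appending to a shared context.
    agents = [
        CognitiveAgent("memory", "Recall relevant episodes from the life story."),
        CognitiveAgent("emotion", "Infer the character's emotional reaction."),
        CognitiveAgent("thinking", "Reason about an in-character reply."),
    ]
    context = f"Life story: {life_story}"
    for agent in agents:
        context += f"\n[{agent.role}] " + agent.process(persona, stimulus, context)
    # A final agent turns the accumulated cognition into the spoken answer.
    composer = CognitiveAgent("speech", "Produce the final first-person reply.")
    return composer.process(persona, stimulus, context)


if __name__ == "__main__":
    print(simulate_response(
        "Mary Jones",
        "Grew up in a coastal town; trained as a nurse.",
        "How do you feel about moving to the city?",
    ))
```

The sequential design reflects one way a simulacrum could ground its answers in a constructed life story before responding; the actual mechanism in the paper may differ in agent roles, ordering, and how stages exchange information.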
