Poster
MMRole: A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents
Yanqi Dai · Huanran Hu · Lei Wang · Shengjie Jin · Xu Chen · Zhiwu Lu
Hall 3 + Hall 2B #238
Recently, Role-Playing Agents (RPAs) have garnered increasing attention for their potential to deliver emotional value and facilitate sociological research. However, existing studies are primarily confined to the textual modality and cannot simulate humans' multimodal perceptual capabilities. To bridge this gap, we introduce the concept of Multimodal Role-Playing Agents (MRPAs) and propose a comprehensive framework, MMRole, for their development and evaluation, which comprises a personalized multimodal dataset and a robust evaluation approach. Specifically, we construct a large-scale, high-quality dataset, MMRole-Data, consisting of 85 characters, 11K images, and 14K single- and multi-turn dialogues. Additionally, we present a robust evaluation approach, MMRole-Eval, encompassing eight metrics across three dimensions, in which a reward model scores MRPAs against the constructed ground-truth data for comparison. Moreover, we develop the first specialized MRPA, MMRole-Agent. Extensive evaluation results demonstrate the improved performance of MMRole-Agent and highlight the primary challenges in developing MRPAs, emphasizing the need for enhanced multimodal understanding and role-playing consistency. The data, code, and models are all available at https://github.com/YanqiDai/MMRole.
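To make the comparative scoring idea behind MMRole-Eval concrete, the Python sketch below shows one plausible shape for it. It is a minimal illustration, not the authors' implementation: the `reward_model.rate` interface and the metric names in `METRICS` are assumptions introduced here for clarity; the actual metric definitions and code are in the paper and the linked repository.

```python
# Minimal sketch of reward-model-based comparative scoring, in the spirit
# of MMRole-Eval. HYPOTHETICAL interface: `reward_model.rate` and the
# metric names below are placeholders, not the released MMRole API.
from dataclasses import dataclass
from typing import Dict, List

# Placeholder names for eight metrics across three dimensions
# (assumed grouping; see the paper for the actual definitions).
METRICS: List[str] = [
    "instruction_adherence", "fluency", "coherence",        # conversational skills
    "image_text_relevance", "response_accuracy",            # multimodal understanding
    "personality", "knowledge", "tone",                     # role-playing qualities
]

@dataclass
class EvalCase:
    character_profile: str   # persona description of the character
    image_path: str          # image grounding the dialogue turn
    dialogue_context: str    # preceding single- or multi-turn dialogue
    ground_truth_reply: str  # constructed reference reply
    agent_reply: str         # reply produced by the MRPA under test

def score_case(reward_model, case: EvalCase) -> Dict[str, float]:
    """Rate both the ground-truth reply and the agent reply per metric,
    and report the agent/ground-truth score ratio as a relative score."""
    scores: Dict[str, float] = {}
    for metric in METRICS:
        gt = reward_model.rate(case.character_profile, case.image_path,
                               case.dialogue_context,
                               case.ground_truth_reply, metric)
        ag = reward_model.rate(case.character_profile, case.image_path,
                               case.dialogue_context,
                               case.agent_reply, metric)
        scores[metric] = ag / max(gt, 1e-6)  # guard against zero scores
    return scores
```

Scoring the agent relative to a constructed ground-truth reply, rather than in isolation, anchors the reward model's judgments and makes scores comparable across characters and dialogue turns.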