Poster in Workshop: World Models: Understanding, Modelling and Scaling
Memory Helps, but Confabulation Misleads: Understanding Streaming Events in Videos with MLLMs
Gengyuan Zhang · Mingcong Ding · Tong Liu · Yao Zhang · Volker Tresp
Keywords: [ Temporal Reasoning ] [ Event Understanding ] [ Video Understanding ] [ Multimodal Large Language Model ]
Multimodal large language models (MLLMs) have demonstrated strong performance in understanding videos holistically, yet their ability to process streaming videos, in which a video is treated as a sequence of visual events, remains underexplored. Intuitively, leveraging past events as memory can enrich contextual and temporal understanding of the current event. In this paper, we show that leveraging memories as context helps MLLMs better understand video events. However, because such memories rely on predictions of preceding events, they may contain misinformation, leading to confabulation and degraded performance. To address this, we propose a confabulation-aware memory modification method that mitigates confabulated memories in memory-enhanced event understanding.
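To make the streaming setup concrete, below is a minimal sketch of event-by-event processing with a memory of past predictions and a confabulation-aware filtering step. All names (EventMemory, mllm_predict, the confidence threshold) are illustrative assumptions, not the authors' actual implementation; the filtering stands in for the paper's memory modification method.

```python
# Sketch: streaming event understanding with memory as context.
# Hypothetical interfaces; the real method and model are not specified here.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class EventMemory:
    """Stores textual predictions of past events with a confidence score."""
    entries: List[Tuple[str, float]] = field(default_factory=list)

    def add(self, event_text: str, confidence: float) -> None:
        self.entries.append((event_text, confidence))

    def as_context(self, min_confidence: float = 0.5) -> str:
        # Confabulation-aware modification (illustrative): drop low-confidence
        # past predictions so misinformation does not propagate as context.
        kept = [text for text, conf in self.entries if conf >= min_confidence]
        return "\n".join(kept)


def understand_stream(
    event_clips: List[object],
    mllm_predict: Callable[[object, str], Tuple[str, float]],
) -> List[str]:
    """Process a video as a sequence of events, conditioning each prediction
    on the (filtered) memory of preceding events."""
    memory = EventMemory()
    predictions: List[str] = []
    for clip in event_clips:
        context = memory.as_context()          # past events as context
        event_text, confidence = mllm_predict(clip, context)
        predictions.append(event_text)
        memory.add(event_text, confidence)      # memory grows as the stream unfolds
    return predictions
```

The loop illustrates the two points of the abstract: memory supplies context for the current event, and, because that memory consists of earlier model predictions, it must be modified (here, confidence-filtered as a placeholder) before reuse to avoid compounding confabulation.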