

Poster

Learning Interleaved Image-Text Comprehension in Vision-Language Large Models

Chenyu Zhou · Mengdan Zhang · Peixian Chen · Chaoyou Fu · Yunhang Shen · Xiawu Zheng · Xing Sun · Rongrong Ji

Hall 3 + Hall 2B #588
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: The swift progress of Multi-modal Large Models (MLLMs) has showcased their impressive ability to tackle tasks blending vision and language. Yet, most current models and benchmarks cater to scenarios with a narrow scope of visual and textual contexts. These models often fall short when faced with complex comprehension tasks, which involve navigating through a plethora of irrelevant and potentially misleading information in both text and image forms. To bridge this gap, we introduce a new, more demanding task known as Interleaved Image-Text Comprehension (IITC). This task challenges models to discern and disregard superfluous elements in both images and text in order to accurately answer questions and to follow intricate instructions to pinpoint the relevant image. In support of this task, we further craft a new VEGA dataset, tailored to the IITC task on scientific content, and devise a subtask, Image-Text Association (ITA), to refine image-text correlation skills. Our evaluation of four leading closed-source models, as well as various open-source models, using VEGA underscores the rigorous nature of IITC. Even the most advanced models, such as Gemini-1.5-pro and GPT4V, achieved only modest success. By employing a multi-task, multi-scale post-training strategy, we set a robust baseline for MLLMs on the IITC task, attaining an 85.8% accuracy rate in image association and a 0.508 Rouge score. These results validate the effectiveness of our dataset in improving MLLMs' capabilities for nuanced image-text comprehension.
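The abstract reports two metrics for IITC: accuracy of the image the model cites and a Rouge score on the generated answer. As a rough illustration of how such scoring could be computed, the sketch below pairs exact-match image-index accuracy with ROUGE-L F-measure; the function name, the `image_id`/`answer` fields, and the choice of ROUGE-L are assumptions for illustration and are not taken from the VEGA codebase.

```python
# Hypothetical scoring sketch for IITC-style outputs (not the authors' code):
# - image association: exact match between the predicted and reference image reference
# - answer quality: ROUGE-L F-measure between generated and reference answers
from rouge_score import rouge_scorer

def evaluate_iitc(predictions, references):
    """Score a list of {'image_id': ..., 'answer': ...} predictions against references."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    correct_images = 0
    rouge_total = 0.0
    for pred, ref in zip(predictions, references):
        # Credit image association only when the cited image matches the ground truth.
        correct_images += int(pred["image_id"] == ref["image_id"])
        # Accumulate ROUGE-L F-measure over answer texts.
        rouge_total += scorer.score(ref["answer"], pred["answer"])["rougeL"].fmeasure
    n = len(references)
    return {"image_accuracy": correct_images / n, "rougeL": rouge_total / n}

# Toy usage example:
preds = [{"image_id": "Fig. 3", "answer": "The encoder fuses patch tokens with text."}]
refs  = [{"image_id": "Fig. 3", "answer": "The encoder fuses patch and text tokens."}]
print(evaluate_iitc(preds, refs))
```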
