Poster

C-CLIP: Multimodal Continual Learning for Vision-Language Model

Wenzhuo Liu · Fei Zhu · Longhui Wei · Qi Tian

Hall 3 + Hall 2B #625
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Multimodal pre-trained models such as CLIP require large-scale image-text pairs for training yet often struggle on domain-specific tasks. Since retraining on both specialized and historical data incurs significant memory and time costs, it is important to continually learn new domains in the open world while preserving the original performance. However, existing continual learning research focuses mainly on single-modal scenarios, and its evaluation criteria are insufficient because they consider neither image-text matching performance nor the forgetting of zero-shot capability. This work introduces image-caption datasets from various domains and establishes a multimodal vision-language continual learning benchmark. We then propose a novel framework, C-CLIP, which not only prevents forgetting but also markedly improves learning on new tasks. Comprehensive experiments demonstrate that our method achieves strong continual learning across image-text datasets from different domains, exhibits little forgetting of the original zero-shot prediction capability, and significantly outperforms existing methods.
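The image-text matching objective that CLIP-style models optimize, and that the benchmark above evaluates, is the symmetric InfoNCE contrastive loss: matched image-caption pairs sit on the diagonal of a cosine-similarity matrix. A minimal NumPy sketch (the function name and fixed temperature value are illustrative, not taken from the paper):

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss used to train CLIP-style models.

    image_emb, text_emb: (N, D) arrays of paired embeddings; row i of
    each is a matched image-caption pair, so the target for both the
    image->text and text->image classification is the diagonal.
    """
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (N, N) similarities

    labels = np.arange(len(logits))

    def cross_entropy(l):
        # numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average of image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Perfectly matched pairs drive the loss toward zero, while shuffled (mismatched) pairs raise it, which is what makes this loss a direct measure of image-text matching quality.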
