

Poster

CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features

Po-han Li · Sandeep Chinchali · Ufuk Topcu

Hall 3 + Hall 2B #301
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Multimodal encoders like CLIP excel at tasks such as zero-shot image classification and cross-modal retrieval, but they require excessive training data. We propose canonical similarity analysis (CSA), which uses two unimodal encoders to replicate a multimodal encoder with limited data. CSA maps unimodal features into a multimodal space, using a new similarity score to retain only the multimodal information. CSA involves only the inference of the unimodal encoders and a cubic-complexity matrix decomposition, eliminating the need for extensive GPU-based model training. On ImageNet classification and misinformative news-caption detection, experiments show that CSA outperforms CLIP while requiring 50,000× fewer multimodal data pairs to bridge the modalities, given pre-trained unimodal encoders. CSA also surpasses the state-of-the-art method for mapping unimodal features to multimodal features. We further demonstrate CSA on modalities beyond image and text, paving the way for modality pairs with limited paired multimodal data but abundant unpaired unimodal data, such as LiDAR and text.
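The mapping idea described above can be illustrated with a minimal sketch. The abstract does not specify CSA's exact similarity score or decomposition, so the code below uses classical canonical correlation analysis as a stand-in: it fits closed-form (cubic-complexity) projections from paired unimodal features into a shared space, with no gradient-based training. The function names and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_cca_maps(X, Y, d, eps=1e-8):
    """Fit CCA-style projections from paired unimodal features.

    X: (n, p) features from one encoder (e.g. image), Y: (n, q) from
    another (e.g. text). Returns Wx (p, d) and Wy (q, d) projecting
    each modality into a shared d-dimensional space. Only matrix
    decompositions (cubic complexity), no GPU training.
    """
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    # SVD of the whitened cross-covariance gives the canonical directions
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U[:, :d], Ky @ Vt.T[:, :d]

# Toy paired "image" and "text" features sharing a 4-dim latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 4))
X = z @ rng.normal(size=(4, 16)) + 0.1 * rng.normal(size=(200, 16))
Y = z @ rng.normal(size=(4, 12)) + 0.1 * rng.normal(size=(200, 12))

Wx, Wy = fit_cca_maps(X, Y, d=4)
px = (X - X.mean(0)) @ Wx
py = (Y - Y.mean(0)) @ Wy
# Paired projections should be highly correlated in the shared space
corr = np.mean([np.corrcoef(px[:, i], py[:, i])[0, 1] for i in range(4)])
```

Retrieval or classification can then be done with a similarity score (e.g. cosine) between `px` and `py` in the shared space; the paper's CSA additionally weights the retained directions to keep only multimodal information.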
