

Poster

The "Law'' of the Unconscious Contrastive Learner: Probabilistic Alignment of Unpaired Modalities

Yongwei Che · Benjamin Eysenbach

Hall 3 + Hall 2B #328
Fri 25 Apr midnight PDT – 2:30 a.m. PDT

Abstract:

While internet-scale data often come in pairs (e.g., audio+image, image+text), we frequently want to perform inference over modality pairs unseen together in the training data (e.g., audio+text). Prior work has addressed this problem by learning multiple contrastive embedding spaces between the observed modality pairs, implicitly hoping that the unseen pairs will end up aligned. This theoretical paper proves that this hope is well founded, under certain assumptions. Starting from the proper Bayesian approach of integrating out the intermediate modality, we show that directly comparing the representations of data from the unpaired modalities can recover the same likelihood ratio. Our analysis builds on prior work on the geometry and probabilistic interpretation of contrastive representations, showing how these representations can answer many of the same queries as probabilistic graphical models. The analysis suggests two new ways of using contrastive representations: in settings with pre-trained contrastive models, and for handling language ambiguity in reinforcement learning. Our numerical experiments study the importance of our assumptions and demonstrate these new applications.
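
For context, a minimal sketch of the marginalization argument in our own notation (not taken verbatim from the paper). Suppose modalities A and C are each contrastively paired with an intermediate modality B, that B renders A and C conditionally independent, and that the learned critics converge to the pointwise mutual information of their training pair (a known property of InfoNCE-style objectives, up to constants):

    \exp\!\big(f_A(a)^\top f_B(b)\big) \propto \frac{p(a,b)}{p(a)\,p(b)},
    \qquad
    \exp\!\big(g_B(b)^\top g_C(c)\big) \propto \frac{p(b,c)}{p(b)\,p(c)}.

Integrating out the intermediate modality (using a \perp c \mid b) then recovers the likelihood ratio for the unseen pair:

    \frac{p(a,c)}{p(a)\,p(c)}
    = \int \frac{p(a,b)}{p(a)\,p(b)} \cdot \frac{p(b,c)}{p(b)\,p(c)}\, p(b)\, \mathrm{d}b
    = \int \frac{p(b \mid a)\, p(c \mid b)}{p(c)}\, \mathrm{d}b
    = \frac{p(c \mid a)}{p(c)}.

Under additional distributional assumptions on the embeddings (which the paper makes precise), this integral is approximated by a direct inner product between the A and C representations, which is the sense in which the unseen pair "ends up aligned."

A hypothetical usage sketch for the pre-trained setting follows; encode_audio and encode_text are placeholder names, not a real API, and the scores are only meaningful under the assumptions above.

    import numpy as np

    def unpaired_scores(audio_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
        """Score two modalities that were never trained together by directly
        comparing their contrastive representations. Each entry approximates
        (up to additive constants) log p(text | audio) / p(text)."""
        # Normalize, since contrastive encoders are typically trained with
        # temperature-scaled cosine similarity.
        a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
        t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
        return a @ t.T  # shape: (n_audio, n_text)

    # Hypothetical usage, with encoders trained on (audio, image) and
    # (image, text) pairs that share the image embedding space:
    # scores = unpaired_scores(encode_audio(clips), encode_text(captions))
    # best_caption_per_clip = scores.argmax(axis=1)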
