

Poster in Workshop: 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models

Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Representation Learning

Simon Schrodi · David Hoffmann · Max Argus · Volker Fischer · Thomas Brox


Abstract:

Contrastive vision-language models like CLIP have gained popularity for their rich representations, which are applicable to various downstream tasks. Despite their success in some tasks, such as zero-shot image recognition, they perform surprisingly poorly on others, such as attribute detection. Previous work has attributed these challenges to the modality gap, a separation of image and text embeddings in the shared representation space, and to a bias favoring objects over other factors, such as attributes. We investigate both phenomena. Specifically, we find a counterintuitive correlation between the modality gap and downstream performance, with only a few embedding dimensions driving the gap. But what leads to the emergence of these phenomena? To answer this question, we design a controlled setting that allows us to vary the amount of shared information between the modalities. This reveals that the driving factor behind both the modality gap and the object bias is the information imbalance between images and captions.
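
As an illustration of the modality gap the abstract refers to, a common way to quantify it (following prior work on the gap, not necessarily the exact measure used by the authors) is the distance between the centroids of the L2-normalized image and text embeddings in the shared space. The sketch below assumes NumPy arrays of precomputed CLIP-style embeddings; the function name and shapes are illustrative.

    # Illustrative sketch (not the authors' code): quantify the modality gap as
    # the Euclidean distance between the mean image and mean text embedding.
    import numpy as np

    def modality_gap(image_embeds: np.ndarray, text_embeds: np.ndarray) -> float:
        """Distance between centroids of L2-normalized embeddings.

        Both inputs have shape (n_samples, embed_dim); embeddings are
        normalized first, as is standard for CLIP-style models.
        """
        img = image_embeds / np.linalg.norm(image_embeds, axis=1, keepdims=True)
        txt = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
        return float(np.linalg.norm(img.mean(axis=0) - txt.mean(axis=0)))

    # Example with random placeholder embeddings (stand-ins for CLIP outputs):
    rng = np.random.default_rng(0)
    gap = modality_gap(rng.normal(size=(100, 512)), rng.normal(size=(100, 512)))
    print(f"modality gap: {gap:.4f}")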
