

Poster

INViTE: INterpret and Control Vision-Language Models with Text Explanations

Haozhe Chen · Junfeng Yang · Carl Vondrick · Chengzhi Mao

Halle B #97
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Large-scale pre-trained vision foundation models, such as CLIP, have become de facto backbones for various vision tasks. However, due to their black-box nature, understanding the rules underlying these models’ predictions and controlling their behavior remain open challenges. We present INViTE: a framework for INterpreting Vision Transformer’s latent tokens with Text Explanations. Given a latent token, INViTE retains its semantic information up to the final layer using the transformer’s local operations and retrieves the closest text as its explanation. INViTE enables understanding of the model’s visual reasoning procedure without additional model training or data collection. Based on the obtained interpretations, INViTE allows for model editing that controls the model’s reasoning behavior and improves robustness against biases and spurious correlations. Our code is available at https://github.com/tonychenxyz/vit-interpret.
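The core idea of mapping an intermediate latent token to text can be illustrated with standard CLIP components. The sketch below is not the authors’ implementation (see the linked repository for that); it is a simplified stand-in that projects an intermediate ViT token into CLIP’s joint embedding space via the final layer norm and visual projection, rather than INViTE’s forwarding through the remaining layers’ local operations, and retrieves the closest caption from a small, hypothetical candidate set.

```python
# Illustrative sketch only: approximates "explain a latent token with text" by
# projecting an intermediate ViT token into CLIP's joint space and retrieving
# the nearest candidate caption. The layer/token indices, image path, and
# candidate texts are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                     # any input image
candidate_texts = ["a dog", "a cat", "grass", "sky"]  # hypothetical explanation vocabulary

inputs = processor(text=candidate_texts, images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    # Hidden states of every vision-transformer layer: tuple of (1, num_tokens, width)
    vision_out = model.vision_model(pixel_values=inputs["pixel_values"],
                                    output_hidden_states=True)
    layer, token_idx = 6, 42                          # pick an intermediate latent token
    token = vision_out.hidden_states[layer][:, token_idx]

    # Project the token into the joint image-text space (a simplification of
    # INViTE's local-operation forwarding to the final layer).
    token = model.vision_model.post_layernorm(token)
    token_embed = model.visual_projection(token)
    token_embed = token_embed / token_embed.norm(dim=-1, keepdim=True)

    # Encode the candidate explanations with CLIP's text encoder.
    text_embeds = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

    # Retrieve the closest text as the token's explanation.
    scores = token_embed @ text_embeds.T
    print(candidate_texts[scores.argmax().item()])
```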
