

Poster

Synergy and Diversity in CLIP: Enhancing Performance Through Adaptive Backbone Ensembling

Cristian Rodriguez-Opazo · Ehsan Abbasnejad · Damien Teney · Hamed Damirchi · Edison Marrese-Taylor · Anton van den Hengel

Hall 3 + Hall 2B #94
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning. Various architectures, from vision transformers (ViTs) to convolutional networks (ResNets), have been trained with CLIP to serve as general solutions to diverse vision tasks. This paper explores the differences across various CLIP-trained vision backbones. Despite using the same data and training objective, we find that these architectures have notably different representations, different classification performance across datasets, and different robustness properties to certain types of image perturbations. Our findings indicate a remarkable possible synergy across backbones by leveraging their respective strengths. In principle, classification accuracy could be improved by over 40 percentage points with an informed selection of the optimal backbone per test example. Using this insight, we develop a straightforward yet powerful approach to adaptively ensemble multiple backbones. The approach uses as few as one labeled example per class to tune the adaptive combination of backbones. On a large collection of datasets, the method achieves a remarkable increase in accuracy of up to 39.1% over the best single backbone, well beyond traditional ensembles.
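The abstract does not specify how the backbone combination is parameterized or tuned. A minimal sketch of one plausible instantiation is given below: learnable per-backbone mixing weights over zero-shot logits, fit by gradient descent on one labeled example per class. The function name, tensor layout, and softmax-weighted fusion are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch (not the paper's exact method): adaptively weight
# zero-shot logits from several CLIP backbones using a tiny labeled set,
# e.g. one example per class as described in the abstract.
import torch
import torch.nn.functional as F

def tune_backbone_weights(logits_per_backbone, labels, steps=200, lr=0.1):
    """Fit a convex combination of backbones on a small labeled set.

    logits_per_backbone: (B, N, C) zero-shot logits from B backbones,
                         for N labeled examples over C classes.
    labels:              (N,) ground-truth class indices.
    Returns:             (B,) softmax-normalized mixing weights.
    """
    num_backbones = logits_per_backbone.shape[0]
    w = torch.zeros(num_backbones, requires_grad=True)  # raw mixing logits
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mix = torch.softmax(w, dim=0)  # convex weights over backbones
        # Fused logits: weighted sum across the backbone dimension.
        fused = torch.einsum('b,bnc->nc', mix, logits_per_backbone)
        loss = F.cross_entropy(fused, labels)
        loss.backward()
        opt.step()
    return torch.softmax(w.detach(), dim=0)

# Usage sketch: fuse test-time logits with the tuned weights.
# weights = tune_backbone_weights(train_logits, train_labels)
# test_pred = torch.einsum('b,bnc->nc', weights, test_logits).argmax(dim=-1)
```

A single global weight vector is the simplest choice; per-class or per-example weights would be a natural extension and may be closer to the adaptive scheme the abstract alludes to.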
