Poster
CONDA: Adaptive Concept Bottleneck for Foundation Models Under Distribution Shifts
Jihye Choi · Jayaram Raghuram · Yixuan Li · Somesh Jha
Hall 3 + Hall 2B #493
Advancements in foundation models (FMs) have led to a paradigm shift in machine learning. The rich, expressive feature representations from these pre-trained, large-scale FMs are leveraged for multiple downstream tasks, usually via lightweight fine-tuning of a shallow fully-connected network following the representation. However, the non-interpretable, black-box nature of this prediction pipeline can be a challenge, especially in critical domains such as healthcare, finance, and security. In this paper, we explore the potential of Concept Bottleneck Models (CBMs) for transforming complex, non-interpretable foundation models into interpretable decision-making pipelines using high-level concept vectors. Specifically, we focus on the test-time deployment of such an interpretable CBM pipeline "in the wild", where the distribution of inputs often shifts from the original training distribution. We first identify the potential failure modes of such pipelines under different types of distribution shifts. We then propose an adaptive concept bottleneck framework that addresses these failure modes by dynamically adapting the concept-vector bank and the prediction layer based solely on unlabeled data from the target domain, without access to the source dataset. Empirical evaluations with various real-world distribution shifts show our framework produces concept-based interpretations better aligned with the test data and boosts post-deployment accuracy by up to 28%, aligning CBM performance with that of non-interpretable classification.
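The CBM pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the dimensions, the random concept bank, and the stand-in FM feature are all hypothetical, and the comments note where CONDA-style test-time adaptation would act.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: FM feature size, number of concepts, number of classes.
d_feat, n_concepts, n_classes = 512, 20, 5

# Concept-vector bank: one unit direction per high-level concept
# (random here purely for illustration).
concept_bank = rng.standard_normal((n_concepts, d_feat))
concept_bank /= np.linalg.norm(concept_bank, axis=1, keepdims=True)

# Frozen foundation-model feature for one input (stand-in for a real FM embedding).
feature = rng.standard_normal(d_feat)

# Concept bottleneck: project the FM feature onto each concept vector.
# These scores are the interpretable intermediate representation.
concept_scores = concept_bank @ feature          # shape: (n_concepts,)

# Shallow prediction layer: a linear map over concept scores only,
# so each class logit decomposes into per-concept contributions.
W = rng.standard_normal((n_classes, n_concepts))
logits = W @ concept_scores                      # shape: (n_classes,)

# Under distribution shift, an adaptive framework like the one proposed
# would update `concept_bank` and `W` using only unlabeled target-domain
# features, without access to the source dataset.
print(concept_scores.shape, logits.shape)
```

The key design point the abstract highlights is that prediction flows only through the concept scores, so adapting the concept bank and the linear head is enough to realign both the interpretations and the accuracy with the shifted test distribution.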