

Faithful Vision-Language Interpretation via Concept Bottleneck Models

Songning Lai · Lijie Hu · Junxiao Wang · Laure Berti-Equille · Di Wang

Halle B #231
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT


The demand for transparency in healthcare and finance has led to interpretable machine learning (IML) models, notably concept bottleneck models (CBMs), valued for their strong performance and for the insights they offer into deep neural networks. However, CBMs' reliance on manually annotated concept data poses challenges. Label-free CBMs have emerged to address this, but they remain unstable, which undermines their faithfulness as explanatory tools. To address this inherent instability, we introduce a formal definition of an alternative concept, the Faithful Vision-Language Concept (FVLC) model, and present a methodology for constructing an FVLC that satisfies four critical properties. Extensive experiments on four benchmark datasets using the Label-free CBM architecture demonstrate that our FVLC outperforms other baselines in stability against both input and concept-set perturbations. Our approach incurs minimal accuracy degradation compared to the vanilla CBM, making it a promising solution for reliable and faithful model interpretation.
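To make the bottleneck idea concrete, the sketch below shows the basic CBM structure the abstract refers to: an input is first mapped to interpretable concept scores, and the label is predicted from those scores alone. All dimensions, weights, and function names here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Minimal sketch of a concept bottleneck model (CBM).
# The label depends on the input only through the concept scores,
# which is what makes the intermediate layer interpretable.

rng = np.random.default_rng(0)
n_features, n_concepts, n_classes = 8, 4, 3

# Hypothetical "learned" weights: input -> concepts, concepts -> label.
W_concept = rng.normal(size=(n_features, n_concepts))
W_label = rng.normal(size=(n_concepts, n_classes))

def predict(x):
    # Sigmoid concept activations act as the interpretable bottleneck.
    concepts = 1.0 / (1.0 + np.exp(-(x @ W_concept)))
    logits = concepts @ W_label
    return concepts, logits

x = rng.normal(size=n_features)
concepts, logits = predict(x)
print(concepts.shape, logits.shape)  # (4,) (3,)
```

Instability of the kind the paper targets shows up here as large changes in `concepts` (and hence in the explanation) under small perturbations of `x` or of the concept set itself.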
