Poster
in
Workshop: First Workshop on Representational Alignment (Re-Align)

Less is More: Discovering Concise Network Explanations

Neehar Kondapaneni · Markus Marks · Oisin Mac Aodha · Pietro Perona

Keywords: [ similarity ] [ XAI ] [ concept ] [ expert ] [ human ] [ explanations ] [ interpretability ] [ features ]


Abstract:

In this work, we introduce Deep Conceptual Network Explanations (DCNE), a new approach for generating human-comprehensible visual explanations of the decisions made by deep neural image classifiers. Our method finds the visual explanations that are critical for discriminating between classes, and it is designed to simultaneously optimize three criteria: the explanations should be few, diverse, and human-interpretable. Our approach builds on the recently introduced Concept Relevance Propagation (CRP). While CRP comprehensively describes individual neuronal activations, it produces far too many concepts to be comprehensible to humans. DCNE instead selects the few most important explanations from a classifier. To evaluate our method, we collected a novel dataset for bird classification and compared our method's explanations to those of human experts. Compared to existing XAI methods, our approach achieves a desirable trade-off between conciseness and completeness when summarizing network explanations: it produces 1/30 as many explanations as CRP with only a small loss in explanation quality. This represents a significant step toward making neural network decisions accessible and interpretable to humans, providing a valuable tool for both researchers and practitioners in the field of explainable artificial intelligence (XAI).
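The abstract does not specify how DCNE picks its few explanations, but the three stated criteria (few, relevant, diverse) can be illustrated with a generic greedy selection sketch. The snippet below is purely illustrative and is not the authors' algorithm: it assumes hypothetical per-concept relevance scores (as CRP would provide) and concept embeddings, and trades relevance against redundancy in the style of maximal marginal relevance.

```python
import numpy as np

def select_concise_explanations(relevance, embeddings, k=5, lam=0.5):
    """Greedily pick k concepts that are relevant yet mutually diverse.

    relevance:  (n,) importance score per concept (hypothetical, e.g. CRP-style).
    embeddings: (n, d) vectors used to measure concept-to-concept similarity.
    lam:        trade-off between relevance (lam) and diversity (1 - lam).
    Returns the indices of the selected concepts, most relevant first.
    """
    n = len(relevance)
    # Normalize embeddings so dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected, candidates = [], set(range(n))
    while len(selected) < k and candidates:
        best, best_score = None, -np.inf
        for i in candidates:
            # Redundancy = highest similarity to anything already chosen.
            redundancy = max((float(emb[i] @ emb[j]) for j in selected),
                             default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

For example, if two concepts are near-duplicates, only one of them is kept and the next slot goes to a less relevant but novel concept, which is the kind of conciseness-versus-completeness trade-off the abstract describes.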
