

Poster

Causal Concept Graph Models: Beyond Causal Opacity in Deep Learning

Gabriele Dominici · Pietro Barbiero · Mateo Espinosa Zarlenga · Alberto Termine · Martin Gjoreski · Giuseppe Marra · Marc Langheinrich

Hall 3 + Hall 2B #533
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Causal opacity denotes the difficulty in understanding the "hidden" causal structure underlying the decisions of deep neural network (DNN) models. This prevents practitioners from relying on and verifying state-of-the-art DNN-based systems, especially in high-stakes scenarios. For this reason, circumventing causal opacity in DNNs represents a key open challenge at the intersection of deep learning, interpretability, and causality. This work addresses this gap by introducing Causal Concept Graph Models (Causal CGMs), a class of interpretable models whose decision-making process is causally transparent by design. Our experiments show that Causal CGMs can: (i) match the generalisation performance of causally opaque models; (ii) enable human-in-the-loop corrections to mispredicted intermediate reasoning steps, improving not only downstream accuracy after corrections but also the reliability of the explanations provided for specific instances; and (iii) support the analysis of interventional and counterfactual scenarios, thereby improving the model's causal interpretability and supporting effective verification of its reliability and fairness.
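To make points (ii) and (iii) concrete, the sketch below shows a minimal, hypothetical concept-based model in PyTorch in which a human can override ("do") a mispredicted intermediate concept and the downstream prediction is recomputed from the corrected concepts. All names (ConceptModel, concept_interventions) are illustrative assumptions, not the authors' implementation of Causal CGMs.

```python
# Minimal sketch of concept-level intervention in a concept-based model.
# Hypothetical names; not the paper's architecture.
import torch
import torch.nn as nn


class ConceptModel(nn.Module):
    """Toy concept-bottleneck-style model: inputs -> concepts -> task label."""

    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_concepts)
        )
        self.task_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, concept_interventions=None):
        # Predict intermediate concepts as probabilities in [0, 1].
        concepts = torch.sigmoid(self.concept_encoder(x))
        # Optionally override specific concepts with expert-provided values,
        # i.e. do(concept_i = value).
        if concept_interventions:
            concepts = concepts.clone()
            for idx, value in concept_interventions.items():
                concepts[:, idx] = value
        # The downstream prediction depends only on the (possibly corrected) concepts.
        return concepts, self.task_predictor(concepts)


# Usage: correct one concept and compare the downstream predictions.
model = ConceptModel(n_features=10, n_concepts=4, n_classes=2)
x = torch.randn(1, 10)
_, y_original = model(x)
_, y_intervened = model(x, concept_interventions={2: 1.0})  # do(concept_2 = 1)
print(y_original.softmax(-1), y_intervened.softmax(-1))
```

In this sketch the intervention simply replaces a predicted concept value before the task head, which is enough to illustrate how concept-level corrections and interventional queries can propagate to the final prediction; the paper's Causal CGMs additionally model the causal graph among concepts.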
