Poster
Reconsidering Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs
Steve Azzolin · Antonio Longa · Stefano Teso · Andrea Passerini
Hall 3 + Hall 2B #508
As Graph Neural Networks (GNNs) become more pervasive, it becomes paramount to build reliable tools for explaining their predictions. A core desideratum is that explanations are faithful, i.e., that they portray an accurate picture of the GNN's reasoning process. However, a number of different faithfulness metrics exist, raising the question of what faithfulness exactly is and how to achieve it. We make three key contributions. We begin by showing that existing metrics are not interchangeable -- i.e., explanations attaining high faithfulness according to one metric may be unfaithful according to others -- and can systematically ignore important properties of explanations. We proceed to show that, surprisingly, optimizing for faithfulness is not always a sensible design goal. Specifically, we prove that for injective regular GNN architectures, perfectly faithful explanations are completely uninformative. This does not apply to modular GNNs, such as self-explainable and domain-invariant architectures, prompting us to study the relationship between architectural choices and faithfulness. Finally, we show that faithfulness is tightly linked to out-of-distribution generalization, in that simply ensuring that a GNN can correctly recognize the domain-invariant subgraph, as prescribed by the literature, does not guarantee that it is invariant unless this subgraph is also faithful. All our code can be found in the supplementary material.
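To make the non-interchangeability point concrete, here is a minimal, illustrative sketch (not the authors' code) of how two widely used faithfulness measures for subgraph explanations, a "keep only the explanation" sufficiency-style score and a "remove the explanation" necessity-style score, can disagree on the very same explanation. The `toy_gnn` classifier, the example graph, and the candidate explanation are hypothetical stand-ins chosen only to make the divergence visible.

```python
# Illustrative sketch: two faithfulness metrics can rank the same explanation differently.
# toy_gnn, the edge list, and the explanation mask are toy assumptions, not the paper's setup.

def toy_gnn(edge_mask, edges):
    # Hypothetical classifier: outputs class-1 probability based on which edges are kept.
    # It fires if either of two redundant motifs, edge (0, 1) or edge (2, 3), is present.
    kept = {e for e, m in zip(edges, edge_mask) if m}
    return 1.0 if (0, 1) in kept or (2, 3) in kept else 0.0

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
full_mask = [1, 1, 1, 1]
explanation = [1, 0, 0, 0]  # candidate explanation: only edge (0, 1)

p_full = toy_gnn(full_mask, edges)                      # prediction on the full graph
p_keep = toy_gnn(explanation, edges)                    # keep only the explanation
p_drop = toy_gnn([1 - m for m in explanation], edges)   # remove the explanation

suff = p_full - p_keep  # sufficiency-style: 0 means the explanation alone reproduces the prediction
nec = p_full - p_drop   # necessity-style: 0 means the prediction is unchanged without the explanation

print(f"sufficiency gap: {suff}, necessity gap: {nec}")
# -> sufficiency gap: 0.0, necessity gap: 0.0
# The explanation looks perfectly faithful under the sufficiency-style metric,
# yet the necessity-style metric reveals it is not needed at all (a redundant motif remains).
```

Under this toy setup, the same subgraph is maximally faithful by one metric and uninformative by the other, which is the kind of discrepancy the abstract refers to when it says the existing metrics are not interchangeable.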