Training-free Counterfactual Explanation for Temporal Graph Model Inference
Abstract
Temporal graph neural networks (TGNNs) extend graph neural networks to dynamic networks and have demonstrated strong predictive power. However, interpreting TGNNs remains far less explored than interpreting their static-graph counterparts. This paper introduces TEMporal Graph eXplainer (TemGX), a training-free, post-hoc framework that helps users interpret and understand TGNN behavior by discovering the temporal subgraphs, and their evolution, that are responsible for TGNN outputs of interest. We introduce a class of explainability measures that extends influence maximization with structural influence and time decay to model temporal influence. We formulate the explanation task as a constrained optimization problem and propose fast algorithms that discover explanations with guarantees on their temporal explainability. Our experimental study verifies the effectiveness and efficiency of TemGX for TGNN explanation compared with state-of-the-art explainers. We also showcase how TemGX supports inference queries for dynamic network analysis.