

Poster

Provably Robust Explainable Graph Neural Networks against Graph Perturbation Attacks

Jiate Li · Meng Pang · Yun Dong · Jinyuan Jia · Binghui Wang

Hall 3 + Hall 2B #503
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Explainable Graph Neural Networks (XGNNs) have gained growing attention as a way to build trust in GNNs, the mainstream method for learning on graph data. Despite this attention, existing XGNNs focus on improving explanation performance, while their robustness under attacks remains largely unexplored. We observe that an adversary can slightly perturb the graph structure such that the explanation result of an XGNN changes drastically. Such vulnerability could cause serious issues, particularly in safety- and security-critical applications. In this paper, we take the first step toward studying the robustness of XGNNs against graph perturbation attacks and propose XGNNCert, the first provably robust XGNN. In particular, XGNNCert provably guarantees that, when the number of perturbed edges is bounded, the explanation result for a graph under the worst-case graph perturbation attack is close to the result without the attack, while the GNN prediction is unaffected. Evaluation results on multiple graph datasets and GNN explainers show the effectiveness of XGNNCert.
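To make the certified guarantee concrete, below is a minimal sketch of one generic way such a bounded-perturbation guarantee can be derived: hash each edge into a disjoint subgraph, explain every subgraph independently, and take a majority vote over edges. Everything here (edge_group, divide_edges, the explain_subgraph stand-in, and the 2m margin argument) is an illustrative assumption, not XGNNCert's actual construction; consult the paper for the real method.

```python
import hashlib

def edge_group(edge, num_groups):
    # Deterministically hash an undirected edge (u, v) into one bucket,
    # so a single perturbed edge can affect only one subgraph.
    u, v = sorted(edge)
    digest = hashlib.md5(f"{u}-{v}".encode()).hexdigest()
    return int(digest, 16) % num_groups

def divide_edges(edges, num_groups):
    # Partition the edge set into disjoint subgraphs by hash bucket.
    groups = [[] for _ in range(num_groups)]
    for e in edges:
        groups[edge_group(e, num_groups)].append(e)
    return groups

def vote_explanation(groups, explain_subgraph, k):
    # Run the explainer on every subgraph and count, for each edge, how
    # many subgraph-level top-k explanations include it.
    votes = {}
    for sub_edges in groups:
        for e in explain_subgraph(sub_edges, k):
            votes[e] = votes.get(e, 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    return ranked[:k], ranked

def certified_perturbation_size(ranked, k):
    # Each perturbed edge lands in exactly one hash group, so m perturbed
    # edges alter at most m subgraph explanations. An altered explanation
    # can cost an in-set edge one vote and grant an out-set edge one vote,
    # so the top-k set is stable whenever the vote margin between the k-th
    # and (k+1)-th ranked edges exceeds 2m: certified m = (margin - 1) // 2.
    if len(ranked) < k:
        return 0  # not enough voted edges to fill the explanation
    runner_up = ranked[k][1] if len(ranked) > k else 0
    margin = ranked[k - 1][1] - runner_up
    return max((margin - 1) // 2, 0)

# Toy usage: a hypothetical explainer that keeps each subgraph's k highest
# node-id-sum edges stands in for a real GNN explainer.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
explainer = lambda sub, k: sorted(sub, key=lambda e: -(e[0] + e[1]))[:k]
groups = divide_edges(edges, num_groups=3)
top_k, ranked = vote_explanation(groups, explainer, k=2)
print(top_k, certified_perturbation_size(ranked, k=2))
```

The design point this sketch illustrates is that disjointness of the hash buckets is what converts an attacker's edge budget into a bounded number of corrupted votes, which is then compared against the observed vote margin to certify stability of the explanation.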
