Poster
Rationalizing and Augmenting Dynamic Graph Neural Networks
Guibin Zhang · Yiyan Qi · Ziyang Cheng · Yanwei Yue · Dawei Cheng · Jian Guo
Hall 3 + Hall 2B #632
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT
Abstract:
Graph data augmentation (GDA) has shown significant promise in enhancing the performance, generalization, and robustness of graph neural networks (GNNs). However, contemporary methods are largely limited to static graphs, and their applicability to dynamic graphs, which are more prevalent in real-world applications, remains unexamined. In this paper, we empirically highlight the challenges faced by static GDA methods when applied to dynamic graphs, particularly their inability to maintain temporal consistency. In light of this limitation, we propose a dedicated augmentation framework for dynamic graphs, termed DyAug, which adaptively augments the evolving graph structure with temporal-consistency awareness. Specifically, we introduce the paradigm of graph rationalization for dynamic GNNs, progressively distinguishing between causal subgraphs (the rationale) and their non-causal complement (the environment) across snapshots. We develop three types of environment replacement, spatial, temporal, and spatial-temporal, to facilitate data augmentation in the latent representation space, thereby improving the performance, generalization, and robustness of dynamic GNNs. Extensive experiments on six benchmarks and three GNN backbones demonstrate that DyAug can (I) improve the performance of dynamic GNNs by 0.89; (II) effectively counter targeted and non-targeted adversarial attacks with a 6.2 performance boost; and (III) make stable predictions under temporal distribution shifts.
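The abstract describes splitting each snapshot's representation into a causal rationale and a non-causal environment, then swapping environments spatially (across samples), temporally (across snapshots), or both. A minimal PyTorch sketch of that latent-space recombination is below; the soft masking scheme, tensor shapes, and function names are illustrative assumptions, not the authors' implementation.

```python
import torch

def split_rationale(h, mask_logits):
    """Soft-split node representations into a causal part (rationale)
    and a non-causal complement (environment) via a learned mask.

    h:           [B, N, D] node representations for one snapshot
    mask_logits: [B, N, 1] scores from a (hypothetical) rationale scorer
    """
    mask = torch.sigmoid(mask_logits)
    return mask * h, (1.0 - mask) * h

def replace_environment(rationale, environments, t, mode="spatial-temporal"):
    """Recombine the snapshot-t rationale with a perturbed environment.

    rationale:    [B, N, D] causal part of snapshot t
    environments: list of T tensors [B, N, D], one per snapshot
    mode:         "spatial", "temporal", or "spatial-temporal"
    """
    T = len(environments)
    env = environments[t]
    if mode in ("temporal", "spatial-temporal"):
        # borrow the environment from a randomly chosen snapshot
        s = int(torch.randint(T, (1,)))
        env = environments[s]
    if mode in ("spatial", "spatial-temporal"):
        # shuffle environments across samples in the batch
        env = env[torch.randperm(env.size(0))]
    return rationale + env
```

The additive recombination assumes representations decompose linearly under the soft mask, so `split_rationale` is lossless: the two parts always sum back to the original tensor, and only the environment half is perturbed during augmentation.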