Poster in Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

GNN-VPA: A Variance-Preserving Aggregation Strategy for Graph Neural Networks

Lisa Schneckenreiter · Richard Freinschlag · Florian Sestak · Johannes Brandstetter · Günter Klambauer · Andreas Mayr


Abstract:

Graph neural networks (GNNs), and especially message-passing neural networks, excel in a variety of domains such as physics, drug discovery, and molecular modeling. The expressivity of GNNs with respect to their ability to discriminate non-isomorphic graphs critically depends on the functions employed for message aggregation and graph-level readout. By applying signal propagation theory, we propose a variance-preserving aggregation function (VPA) that maintains expressivity, but yields improved forward and backward dynamics. Experiments demonstrate that VPA leads to increased predictive performance for popular GNN architectures as well as improved learning dynamics. Our results could pave the way towards normalizer-free or self-normalizing GNNs.
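The abstract does not spell out the aggregation rule, so the following is a minimal sketch under one assumption: that a variance-preserving aggregation scales the summed neighbor messages by 1/sqrt(n) for n incoming messages. Under zero-mean, unit-variance i.i.d. messages (the standard setting in signal propagation analyses), sum aggregation inflates the output variance by a factor of n, mean aggregation shrinks it by 1/n, and the assumed sum/sqrt(n) rule keeps it at 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Messages are i.i.d. with zero mean and unit variance, as in signal
# propagation analyses. Compare the variance of the aggregated message
# under sum, mean, and the (assumed) VPA rule sum / sqrt(n).
for n in (4, 16, 64):                          # number of neighbors
    msgs = rng.standard_normal((100_000, n))   # 100k simulated neighborhoods
    s = msgs.sum(axis=1)                       # plain sum aggregation
    print(f"n={n:2d}  sum var={s.var():7.2f}  "
          f"mean var={(s / n).var():7.4f}  "
          f"vpa var={(s / np.sqrt(n)).var():5.3f}")
```

Running this prints variances near n, 1/n, and 1 for the three aggregators respectively, illustrating why only the sqrt-scaled variant preserves the message variance as neighborhood size grows.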
