

Poster in Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

Weisfeiler–Leman at the margin: When more expressivity matters

Billy Franks · Christopher Morris · Ameya Velingker · Floris Geerts


Abstract: The Weisfeiler–Leman algorithm ($1\textsf{-WL}$) is a well-studied heuristic for the graph isomorphism problem. Recently, the algorithm has played a prominent role in understanding the expressive power of message-passing graph neural networks (MPNNs) and in assessing its effectiveness as a graph kernel. Despite its success, the $1\textsf{-WL}$ faces challenges in distinguishing non-isomorphic graphs, leading to the development of more expressive MPNN and kernel architectures. However, the relationship between enhanced expressivity and improved generalization performance remains unclear. Here, we show that an architecture's expressivity offers limited insight into its generalization performance when viewed through the lens of graph isomorphism. Moreover, we focus on augmenting the $1\textsf{-WL}$ and MPNNs with subgraph information and employ classical margin theory to investigate the conditions under which an architecture's increased expressivity aligns with improved generalization performance. In addition, we introduce variations of expressive $1\textsf{-WL}$-based kernel and MPNN architectures with provable generalization properties. Our empirical study confirms the validity of our theoretical findings.
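
For readers unfamiliar with the $1\textsf{-WL}$ heuristic the abstract builds on, the following is a minimal sketch of its color-refinement procedure on an undirected graph given as an adjacency list. It is an illustrative, hypothetical implementation, not the authors' code; the function name and representation are assumptions.

```python
# Minimal sketch of 1-WL color refinement (hypothetical helper, not the
# authors' implementation). Two graphs with different color histograms
# are guaranteed non-isomorphic; equal histograms are inconclusive,
# which is exactly the 1-WL expressivity limitation discussed above.
from collections import defaultdict


def wl_refinement(adjacency, num_iterations=3):
    # Start with a uniform color for every node.
    colors = {v: 0 for v in adjacency}
    for _ in range(num_iterations):
        # A node's new color is determined by its own color together
        # with the multiset of its neighbors' colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
            for v in adjacency
        }
        # Relabel signatures with small integers to keep colors compact.
        palette, new_colors = {}, {}
        for v, sig in signatures.items():
            if sig not in palette:
                palette[sig] = len(palette)
            new_colors[v] = palette[sig]
        if new_colors == colors:  # stable coloring reached
            break
        colors = new_colors
    # The color histogram serves as the graph invariant.
    histogram = defaultdict(int)
    for c in colors.values():
        histogram[c] += 1
    return dict(histogram)


# Example: a triangle and a path on three nodes get different histograms.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_refinement(triangle))  # {0: 3} -- all nodes receive the same color
print(wl_refinement(path))      # {0: 2, 1: 1} -- endpoints differ from the middle node
```

The subgraph-augmented variants studied in the paper enrich exactly this kind of node signature with additional structural information, which is where the expressivity-versus-generalization trade-off analyzed via margin theory comes in.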
