

Poster in Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Graph transformers express monadic second-order logic

Tamara Drucks · Mahito Sugiyama

Keywords: [ transformers ] [ expressivity ] [ graph machine learning ] [ logic ]


Abstract:

We quantify the expressive power of graph transformers by establishing a formal connection to monadic second-order logic (MSO). Expressivity analysis for graph learning algorithms commonly focuses on their ability to produce distinct embeddings for non-isomorphic graphs. This line of research is well established for message-passing graph neural networks, whose power is tightly connected to the Weisfeiler-Leman color refinement algorithm. In contrast, graph transformers have mostly been evaluated empirically, with little theoretical investigation so far. In natural language processing, the expressive power of transformers is quantified by their ability to recognize certain formal languages. Here, we focus on graph learning and MSO and show that graph transformers can produce distinct embeddings for graphs that differ in MSO-definable properties. MSO is a powerful logic for graph-related tasks: by Courcelle's theorem, every MSO-definable decision problem can be solved in linear time on graphs of bounded treewidth.
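To make "MSO-definable" concrete, a standard textbook example (not taken from this abstract) is bipartiteness, i.e., 2-colorability. Monadic second-order logic may quantify over vertex sets, so it suffices to guess a set X and require every edge to cross the cut between X and its complement:

    \exists X \,\forall u \,\forall v \,\bigl( E(u,v) \rightarrow ((u \in X \wedge v \notin X) \vee (u \notin X \wedge v \in X)) \bigr)

Here E(u,v) is the edge relation and X ranges over vertex sets; the set quantifier \exists X is exactly what separates MSO from first-order logic.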
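For contrast with the Weisfeiler-Leman baseline mentioned above, the following is a minimal Python sketch of 1-dimensional color refinement (1-WL); the function name and the example graphs are illustrative, not from the paper. In each round, a node's color becomes a hash of its current color together with the sorted multiset of its neighbors' colors; if two graphs end up with the same multiset of colors, 1-WL cannot distinguish them.

    from collections import Counter

    def wl_color_multiset(adj, rounds=None):
        # adj: graph as {node: [neighbors]}; refinement stabilizes in
        # at most |V| rounds, so that is the default.
        colors = {v: 1 for v in adj}  # uniform initial coloring
        for _ in range(rounds if rounds is not None else len(adj)):
            # New color = hash of (own color, sorted neighbor colors).
            # Hashing keeps colors small; collisions are astronomically
            # unlikely at this scale.
            colors = {
                v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                for v in adj
            }
        return Counter(colors.values())

    # A 6-cycle and two disjoint triangles: both 2-regular on 6 vertices.
    cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                     3: [4, 5], 4: [3, 5], 5: [3, 4]}
    print(wl_color_multiset(cycle6) == wl_color_multiset(two_triangles))  # True

The script prints True: 1-WL assigns both graphs identical color multisets, even though the 6-cycle is bipartite and the two triangles are not, so the pair differs in the MSO-definable property sketched above. Pairs like this illustrate the kind of gap that an MSO-based expressivity guarantee for graph transformers, as claimed in the abstract, reaches beyond.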
