Poster in Workshop: XAI4Science: From Understanding Model Behavior to Discovering New Scientific Knowledge
Understanding Information Flow in Graph Transformers via Attention Graphs
Batu El · Deepro Choudhury · Pietro Lio · Chaitanya Joshi
We introduce Attention Graphs, a new tool for mechanistic interpretability of Graph Neural Networks (GNNs) and Graph Transformers, based on the mathematical equivalence between message passing in GNNs and the self-attention mechanism in Transformers. Attention Graphs aggregate attention matrices across Transformer layers and heads to describe how information flows among input nodes. Through experiments on homophilous and heterophilous node classification tasks, we find that: (1) when Graph Transformers are allowed to learn the optimal graph structure using all-to-all attention among input nodes, the attention graphs learned by the model tend not to correlate with the original input graph structure; and (2) for heterophilous graphs, different Graph Transformer variants can achieve similar performance while utilising distinct information flow patterns.
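To make the aggregation idea concrete, the sketch below shows one plausible way to combine per-layer, per-head attention matrices into a single weighted adjacency matrix over input nodes. This is a minimal illustration, not the authors' implementation: the function name, the mean-over-heads reduction, and the layer-composition ("rollout"-style matrix product) option are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative, not the paper's code): aggregate attention
# matrices from a Graph Transformer into one "attention graph".
# Assumes attentions[l][h] is an (N, N) row-stochastic matrix whose entry
# (i, j) is the attention node i pays to node j in layer l, head h.
import numpy as np

def attention_graph(attentions, combine_layers="matmul"):
    """Return an N x N weighted adjacency matrix describing information flow.

    attentions: list (layers) of lists (heads) of (N, N) numpy arrays.
    combine_layers: "matmul" composes layers so multi-hop flow is captured;
                    "mean" simply averages the per-layer matrices.
    """
    # Average over heads within each layer.
    per_layer = [np.stack(heads, axis=0).mean(axis=0) for heads in attentions]

    if combine_layers == "matmul":
        # Compose layers: later layers act on the output of earlier ones.
        flow = per_layer[0]
        for layer_attn in per_layer[1:]:
            flow = layer_attn @ flow
        return flow
    return np.mean(per_layer, axis=0)

# Toy usage: 2 layers, 2 heads, 4 nodes with random row-stochastic attention.
rng = np.random.default_rng(0)
def random_attention(n):
    a = rng.random((n, n))
    return a / a.sum(axis=1, keepdims=True)

attns = [[random_attention(4) for _ in range(2)] for _ in range(2)]
print(attention_graph(attns).round(3))  # weighted edges of the attention graph
```

The resulting matrix can then be thresholded or compared against the input graph's adjacency matrix to ask the questions raised in the abstract, e.g. whether the learned information flow recovers the original graph structure.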