Poster

Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?

Yifan Feng · Chengwu Yang · Xingliang Hou · Shaoyi Du · Shihui Ying · Zongze Wu · Yue Gao

Hall 3 + Hall 2B #528
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Existing benchmarks like NLGraph and GraphQA evaluate LLMs on graphs by focusing mainly on pairwise relationships, overlooking the high-order correlations found in real-world data. Hypergraphs, which can model complex beyond-pairwise relationships, offer a more robust framework but remain underexplored in the context of LLMs. To address this gap, we introduce LLM4Hypergraph, the first comprehensive benchmark, comprising 21,500 problems across eight low-order, five high-order, and two isomorphism tasks, built from both synthetic and real-world hypergraphs drawn from citation networks and protein structures. We evaluate six prominent LLMs, including GPT-4o, demonstrating the benchmark's effectiveness in identifying model strengths and weaknesses. Our specialized prompting framework incorporates seven hypergraph languages and introduces two novel techniques, Hyper-BAG and Hyper-COT, which enhance high-order reasoning and achieve an average 4% (up to 9%) performance improvement on structure classification tasks. This work establishes a foundational testbed for integrating hypergraph computational capabilities into LLMs, advancing their comprehension of high-order structures.
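To make the prompting setup concrete, below is a minimal, illustrative sketch of how a hypergraph might be serialized into text and wrapped in a step-by-step prompt for an LLM. The encoding format, prompt wording, and example task here are assumptions for illustration: stand-ins for the paper's seven hypergraph languages and its Hyper-COT technique, not the actual implementations.

```python
# Illustrative sketch (not the paper's code): serialize a hypergraph into
# text and wrap it in a chain-of-thought-style prompt, mimicking the kind
# of pipeline the abstract describes. All formats and wording are assumed.

from typing import List

# A toy hypergraph: vertices plus hyperedges, where each hyperedge may
# connect more than two vertices (the beyond-pairwise case).
vertices: List[int] = [0, 1, 2, 3, 4]
hyperedges: List[List[int]] = [[0, 1, 2], [2, 3], [1, 3, 4]]


def encode_hypergraph(vertices: List[int], hyperedges: List[List[int]]) -> str:
    """Serialize a hypergraph as plain text, one hyperedge per line."""
    lines = [f"Vertices: {', '.join(map(str, vertices))}"]
    for i, edge in enumerate(hyperedges):
        lines.append(f"Hyperedge e{i}: {{{', '.join(map(str, edge))}}}")
    return "\n".join(lines)


def build_prompt(question: str) -> str:
    """Combine the encoded structure, a task question, and a step-by-step
    instruction (a stand-in for the paper's Hyper-COT idea)."""
    return (
        "You are given a hypergraph, where each hyperedge may connect "
        "more than two vertices.\n\n"
        f"{encode_hypergraph(vertices, hyperedges)}\n\n"
        f"Question: {question}\n"
        "Think step by step, reasoning over whole hyperedges rather than "
        "pairwise links, then give the final answer."
    )


if __name__ == "__main__":
    # Example low-order task: vertex degree, i.e. the number of
    # hyperedges incident to a vertex (here, vertex 2 has degree 2).
    print(build_prompt("What is the degree of vertex 2?"))
```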