

Poster in Workshop: ICLR 2025 Workshop on Human-AI Coevolution

How Deeply Do LLMs Internalize Human Citation Practices? A Graph-Structural and Embedding-Based Evaluation

Melika Mobini · Vincent Holst · Floriano Tori · Andres Algaba · Vincent Ginis


Abstract:

As Large Language Models (LLMs) integrate into scientific workflows, understanding how they conceptualize the literature becomes critical. We compare LLM-generated citation suggestions with real references from top AI conferences (AAAI, NeurIPS, ICML, ICLR), analyzing key citation-graph properties: centralities, clustering coefficients, and other structural measures. Using OpenAI embeddings of paper titles, we quantify how closely LLM-generated citations align with the ground-truth references. Our findings reveal that LLM-generated citations closely resemble human references in these distributional properties, while deviating significantly from random baselines.
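The two kinds of comparison described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes NetworkX for the graph-structural measures (mean in-degree centrality and average clustering coefficient over a directed citation graph) and uses cosine similarity between hypothetical title-embedding vectors in place of actual OpenAI embeddings. All edge lists and vectors are toy data invented for the example.

```python
import networkx as nx
import numpy as np

def graph_profile(edges):
    """Build a directed citation graph from (citing, cited) pairs and
    return two distributional summaries of the kind compared in the
    paper: mean in-degree centrality and average clustering coefficient
    (computed on the undirected view, as clustering is typically defined)."""
    G = nx.DiGraph(edges)
    mean_in_centrality = float(np.mean(list(nx.in_degree_centrality(G).values())))
    avg_clustering = nx.average_clustering(G.to_undirected())
    return mean_in_centrality, avg_clustering

def cosine(u, v):
    """Cosine similarity between two embedding vectors, as one could use
    to score alignment between an LLM-suggested title and a real reference."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy citation data: (citing paper, cited paper) pairs.
human_edges = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C")]
llm_edges   = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "B")]

print("human profile:", graph_profile(human_edges))
print("LLM profile:  ", graph_profile(llm_edges))

# Hypothetical 3-d "embeddings" of two paper titles.
print("title similarity:", cosine([0.2, 0.7, 0.1], [0.25, 0.65, 0.05]))
```

In the actual study, the graph profiles would be computed over full conference citation networks and the embeddings obtained from the OpenAI embeddings API, but the comparison logic (summary statistics of graph structure, pairwise similarity in embedding space) follows the same shape.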
