Poster in Workshop: Second Workshop on Representational Alignment (Re$^2$-Align)

Do Large Language Models Perceive Orderly Number Concepts as Humans?

Xuanjie Liu · Cong Zeng · Shengkun Tang · Ziyu Wang · Zhiqiang Xu · Gus Xia


Abstract:

Large language models (LLMs) have demonstrated powerful abilities in reasoning and mathematics. However, due to their black-box nature, conventional theoretical methods struggle to analyze their internal properties. As a result, researchers have turned to cognitive science perspectives, investigating how LLMs encode concepts that align with human cognition. Prior work has explored constructs such as time and spatial orientation, revealing alignment between LLM representations and human cognition. Despite this progress, a concept central to human reasoning, namely numbers, remains underexplored. In this paper, we examine numerical concepts by introducing a metric, \textit{orderliness}, to assess how number embeddings are spatially arranged across LLM layers, drawing parallels to the human mental number line. Our experiments reveal that LLMs initially encode numerical order in a structured manner, as evidenced by high orderliness in shallow layers. Using our proposed metric, we observe a two-phase decline in orderliness across layers. Through further analysis of LLaMA 3.1, we identify this decline as being closely linked to contextualization and next-token prediction. Our findings shed light on how LLMs encode numerical concepts, offering a novel perspective on their internal representation of ordered information and its potential alignment with human numerical cognition.
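The abstract does not define the orderliness metric itself, so the sketch below is only an illustrative proxy for the general idea: extract layer-wise hidden states for number tokens and check how well a 1D projection of those embeddings preserves numeric order. The model checkpoint, the number range, and the choice of a PCA projection with a Spearman rank correlation are all assumptions, not the authors' method.

```python
# Illustrative proxy for layer-wise "orderliness" of number embeddings.
# NOT the paper's metric: PCA projection + Spearman correlation are assumed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.decomposition import PCA
from scipy.stats import spearmanr

MODEL = "meta-llama/Llama-3.1-8B"   # assumed checkpoint; any causal LM works
NUMBERS = list(range(100))          # assumed range of number tokens to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def number_embeddings():
    """Collect the last-token hidden state of each number at every layer."""
    per_layer = None
    for n in NUMBERS:
        inputs = tokenizer(str(n), return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # out.hidden_states: tuple of (num_layers + 1) tensors [1, seq, dim]
        states = [h[0, -1, :].float() for h in out.hidden_states]
        if per_layer is None:
            per_layer = [[] for _ in states]
        for layer_idx, vec in enumerate(states):
            per_layer[layer_idx].append(vec)
    return [torch.stack(vecs).numpy() for vecs in per_layer]

def orderliness_proxy(layer_matrix):
    """|Spearman rho| between a 1D PCA projection and the true numeric order."""
    proj = PCA(n_components=1).fit_transform(layer_matrix).ravel()
    rho, _ = spearmanr(proj, NUMBERS)
    return abs(rho)

for layer_idx, mat in enumerate(number_embeddings()):
    print(f"layer {layer_idx:2d}: orderliness proxy = {orderliness_proxy(mat):.3f}")
```

Under the abstract's claim, such a curve would start high in shallow layers and show a two-phase decline toward the deeper layers.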
