Poster in Workshop: First Workshop on Representational Alignment (Re-Align)
Context-Sensitive Semantic Reasoning in Large Language Models
Tyler Giallanza · Declan Campbell
Keywords: [ large language models ] [ semantic reasoning ] [ semantic knowledge ] [ context ] [ semantic cognition ] [ attention ]
The development of large language models (LLMs) holds promise for increasing the scale and breadth of experiments probing human cognition. LLMs will be useful for studying the human mind to the extent that their behaviors and their representations are aligned with those of humans. Here we test this alignment by measuring the degree to which LLMs reproduce the context sensitivity that humans demonstrate in semantic reasoning tasks. In two simulations we show that, like humans, leading LLMs are sensitive to both local context and task context, reasoning about the same item differently when it is presented in different contexts or tasks. However, the representations derived from LLM embeddings do not exhibit this context sensitivity. These results suggest that LLMs may provide useful models of context-dependent human behavior, but cognitive scientists should be cautious when assuming that the embeddings reflect the same context sensitivity. More broadly, the results demonstrate that behavioral alignment between two intelligent systems does not entail representational alignment.
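As an illustrative sketch only, not the procedure used in this work: one simple way to ask whether embeddings are context-sensitive is to compare contextual embeddings of the same word drawn from different local contexts. The model choice (`bert-base-uncased`), the helper function `word_embedding`, and the example sentences below are all assumptions for illustration.

```python
# Sketch (assumed setup, not the authors' method): compare contextual embeddings
# of the same word in two different local contexts.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-uncased"  # assumed model; any encoder exposing hidden states works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Return the mean hidden state over the subword tokens of `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    tokens = enc["input_ids"][0].tolist()
    # Locate the first occurrence of the word's subword span in the sentence tokens.
    for i in range(len(tokens) - len(word_ids) + 1):
        if tokens[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"'{word}' not found in tokenized sentence")

# The same item ("bat") in two local contexts; a high cosine similarity would
# suggest the representation is largely insensitive to context, a low one the opposite.
e1 = word_embedding("The bat flew out of the cave at dusk.", "bat")
e2 = word_embedding("He swung the bat and hit a home run.", "bat")
sim = torch.nn.functional.cosine_similarity(e1, e2, dim=0)
print(f"cosine similarity across contexts: {sim.item():.3f}")
```

A behavioral analogue would instead prompt the model to reason about the item in each context and compare its responses, which is how behavior and representations can dissociate in the way the abstract describes.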