Poster
in
Workshop: First Workshop on Representational Alignment (Re-Align)

Human-like Geometric Abstraction in Large Pre-Trained Neural Networks

Declan Campbell · Sreejan Kumar · Tyler Giallanza · Jonathan Cohen · Thomas L. Griffiths

Keywords: [ multimodal models ] [ geometric reasoning ] [ abstraction ]


Abstract:

Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry. Recent research in cognitive science suggests that neural networks do not share this capacity, proposing instead that human geometric abilities arise from discrete symbolic structure in human mental representations. However, progress in artificial intelligence (AI) suggests that neural networks begin to demonstrate more human-like reasoning after scaling up standard architectures in both model size and amount of training data. In this study, we revisit empirical results in cognitive science on geometric visual processing and identify three key biases in that domain: a sensitivity towards complexity, a preference for regularity, and the perception of parts and relations. We test tasks from the literature that probe these biases in humans and find that large pre-trained neural network models used in modern forms of AI demonstrate human-like biases in abstract geometric processing.