

Invited Talk in Workshop: Second Workshop on Representational Alignment (Re$^2$-Align)

Representations with a Purpose: Grounding Alignment in Use-Driven Questions

Martin Schrimpf


Abstract:

Artificial neural network models have become powerful, testable hypotheses about how the brain gives rise to mind. Today's best vision and language models are already decent predictors of neural responses from V1 through IT and across the human language network. To systematize progress, we built Brain‑Score, an open platform with >100 neural and behavioral benchmarks for automated model evaluation. Large‑scale comparisons reveal a consistent trend: the higher a model's task performance – object categorization in vision, next‑word prediction in language – the more its internal representations resemble those found in the brain. Building on these insights, we built neuro-anatomically informed architectures that improve brain alignment and confer ML benefits such as adversarial robustness.

With this progress in NeuroAI, I will argue that representational alignment is ultimately a means to an end. Illustrating this view, I will showcase two use‑driven projects: (i) topographic visual models predict the perceptual effects of causal interventions such as micro‑stimulation, opening a path toward model‑guided neuro‑prosthetics; (ii) brain‑aligned language models identify the most impactful stimuli for human neuroimaging experiments, streamlining experiment design. By grounding representational alignment in application‑oriented questions, NeuroAI can both deepen our mechanistic understanding of cognition and advance clinical translation for brain‑related disorders.
