

Poster

Do Mice Grok? Glimpses of Hidden Progress in Sensory Cortex

Tanishq Kumar · Blake Bordelon · Cengiz Pehlevan · Venkatesh Murthy · Samuel Gershman

Hall 3 + Hall 2B #380
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Does learning of task-relevant representations stop when behavior stops changing? Motivated by recent work in machine learning and the intuitive observation that human experts continue to learn after mastery, we hypothesize that task-specific representation learning in cortex can continue even after behavior saturates. In a novel reanalysis of recently published neural data, we find evidence for such learning in the posterior piriform cortex of mice that received continued training on a task long after behavior saturated at near-ceiling performance ("overtraining"). We demonstrate that class representations in cortex continue to separate during overtraining, so that examples misclassified at the start of overtraining can abruptly become correctly classified later on, despite no change in behavior over that period. We hypothesize that this hidden learning takes the form of approximate margin maximization; we validate this and other predictions in the neural data, and we build and interpret a simple synthetic model that recapitulates these phenomena. We conclude by demonstrating how this model of late-time feature learning explains the empirical puzzle of overtraining reversal in animal learning, in which task-specific representations are more robust to particular task changes because the learned features can be reused.
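
The margin-maximization idea can be illustrated with a minimal synthetic sketch. This is not the authors' model or data: it is a hypothetical toy setup in which a linear classifier trained by gradient descent on logistic loss reaches perfect accuracy within a few steps ("behavior saturates"), while its classification margin keeps growing for many more steps (hidden progress). All data and hyperparameters below are illustrative choices.

# Minimal illustrative sketch (not the authors' model): gradient descent on
# logistic loss saturates in accuracy early, yet the minimum margin of the
# learned readout keeps increasing, i.e. approximate margin maximization.
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable Gaussian clusters in 2D, a stand-in for two stimulus classes.
n = 100
X = np.vstack([rng.normal(loc=[+1.5, 0.0], scale=0.4, size=(n, 2)),
               rng.normal(loc=[-1.5, 0.0], scale=0.4, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

w = np.zeros(2)   # linear readout weights
lr = 0.1          # learning rate

def min_margin(w, X, y):
    # Signed distance of the closest point to the decision boundary.
    norm = np.linalg.norm(w)
    return float(np.min(y * (X @ w)) / norm) if norm > 0 else 0.0

for step in range(1, 20001):
    margins = y * (X @ w)
    # sigmoid(-margin), clipped for numerical stability; this is the per-example
    # derivative of the logistic loss with respect to the margin.
    factor = 1.0 / (1.0 + np.exp(np.clip(margins, -50.0, 50.0)))
    grad = -(X * (y * factor)[:, None]).mean(axis=0)   # gradient of mean logistic loss
    w -= lr * grad
    if step in (10, 100, 1000, 10000, 20000):
        acc = float(np.mean(np.sign(X @ w) == y))
        print(f"step {step:6d}  accuracy {acc:.2f}  min margin {min_margin(w, X, y):.3f}")

# Typical behavior: accuracy reaches 1.00 within tens of steps, yet the minimum
# margin continues to grow for orders of magnitude longer; the learning is
# invisible in behavior but visible in the geometry of the representation.

The continued margin growth in this sketch reflects the known implicit bias of gradient descent on separable data toward the maximum-margin separator; it is offered only as an intuition pump for the hypothesis stated in the abstract.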
