Generalization beyond the training distribution in brains and machines

Christina Funke, Judith Borowski, Drew Linsley, Xavier Boix


Deep Neural Networks (DNNs) are the leading approach in nearly all domains of machine learning and computer vision, with performance at times rivaling human perception. However, there is consensus that these models are outmatched by the robustness and versatility of biological brains. DNNs are sensitive to so-called distribution shifts, in which systematic differences between the training and test sets significantly degrade performance. Distribution shifts can be induced by random or structured (adversarial) perturbations; changes in object or scene viewpoint, illumination, or color; and novel compositions of familiar features. These issues are magnified in domains where training data is scarce. In contrast, flexible and efficient generalization is a hallmark of biological perception and intelligence. We believe that the algorithms implemented in biological brains offer clues for how to construct artificial intelligence that can generalize beyond the training distribution.
The limited generalization of neural networks is a critical problem for artificial intelligence, in applications ranging from automated driving to biomedical image analysis, and in domains such as reinforcement learning, control, and representational theory. Our goal is to address these issues by creating synergies among neuroscientists, cognitive scientists, and artificial intelligence researchers that may lead to novel solutions to this problem or draw attention to relevant existing classical work.
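The distribution shift described above can be made concrete with a minimal sketch (an illustration written for this summary, not code from the workshop): a nearest-centroid classifier is fit on data drawn from one distribution, then evaluated both in-distribution and under a shift of the input means. Its accuracy drops on the shifted test set even though the labeling rule is unchanged.

```python
# Minimal sketch of a distribution (covariate) shift: a classifier trained
# on one distribution loses accuracy when the test inputs are shifted.
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two 1-D Gaussian classes; `shift` moves both class means."""
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=n)
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=n)
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# "Train": estimate the class centroids on in-distribution data.
x_tr, y_tr = sample(1000)
c0, c1 = x_tr[y_tr == 0].mean(), x_tr[y_tr == 1].mean()

def accuracy(x, y):
    """Nearest-centroid prediction, compared against the true labels."""
    pred = (np.abs(x - c1) < np.abs(x - c0)).astype(float)
    return (pred == y).mean()

x_in, y_in = sample(1000)               # same distribution as training
x_out, y_out = sample(1000, shift=1.5)  # shifted test distribution

print(f"in-distribution accuracy:      {accuracy(x_in, y_in):.2f}")
print(f"shifted-distribution accuracy: {accuracy(x_out, y_out):.2f}")
```

The shift moves both classes relative to the frozen decision boundary, so many shifted class-0 inputs cross it and are misclassified; the same mechanism, in higher dimensions, underlies the viewpoint, illumination, and color shifts mentioned in the abstract.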




Fri 6:00 a.m. - 6:05 a.m.
Opening remarks and introduction of Shimon Ullman (Opening remarks)
Fri 6:05 a.m. - 6:40 a.m.
Shimon Ullman (Talk)
Fri 6:40 a.m. - 6:41 a.m.
Introduction of Ida Momennejad (Introduction)
Fri 6:41 a.m. - 7:16 a.m.
Ida Momennejad (Talk)
Fri 7:16 a.m. - 7:25 a.m.
Q&A, Shimon Ullman and Ida Momennejad (Q&A)
Fri 7:25 a.m. - 7:45 a.m.

Interact with others: informally discuss ideas, meet people and network.

At the big tables, we suggest the following questions as starting points:

Red table: Do we need to look at biological systems to improve generalization?

Green table: Are current benchmarks helpful? Which benchmarks do you like best?

White table: What are key problems to address in order to improve out-of-distribution generalization?

Further, check out the bar, picnic and coffee tables!

Fri 7:45 a.m. - 8:00 a.m.
Coffee Break (Break)
Fri 8:00 a.m. - 8:01 a.m.
Introduction of Margaret Livingstone (Introduction)
Fri 8:01 a.m. - 8:36 a.m.
Margaret Livingstone (Talk)
Fri 8:36 a.m. - 8:37 a.m.
Introduction of Pawan Sinha (Introduction)
Fri 8:37 a.m. - 9:12 a.m.
Pawan Sinha (Talk)
Fri 9:12 a.m. - 9:20 a.m.
Q&A, Margaret Livingstone and Pawan Sinha (Q&A)
Fri 9:20 a.m. - 9:50 a.m.
Panel discussion
Fri 9:50 a.m. - 10:50 a.m.
Lunch Break (Break)
Fri 10:50 a.m. - 10:51 a.m.
Introduction of Brenden Lake (Introduction)
Fri 10:51 a.m. - 11:26 a.m.
Brenden Lake (Talk)
Fri 11:26 a.m. - 11:27 a.m.
Introduction of Kimberly Stachenfeld (Introduction)
Fri 11:27 a.m. - 12:02 p.m.
Kimberly Stachenfeld (Talk)
Fri 12:02 p.m. - 12:03 p.m.
Introduction of Thomas Serre (Introduction)
Fri 12:03 p.m. - 12:38 p.m.
Thomas Serre (Talk)
Fri 12:38 p.m. - 12:50 p.m.
Q&A, Brenden Lake, Kimberly Stachenfeld, and Thomas Serre (Q&A)
Fri 12:50 p.m. - 1:00 p.m.
Coffee Break (Break)
Fri 1:00 p.m. - 1:30 p.m.

Please find the Zoom links to the different poster sessions in the link below:


Fri 1:30 p.m. - 1:31 p.m.
Introduction of Aleksander Madry (Introduction)
Fri 1:31 p.m. - 2:06 p.m.
Aleksander Madry (Talk)
Fri 2:06 p.m. - 2:11 p.m.
Q&A Aleksander Madry (Q&A)
Fri 2:11 p.m. - 2:12 p.m.
Introduction of Spandan Madan (Introduction)
Fri 2:12 p.m. - 2:47 p.m.
Spandan Madan (Talk)
Fri 2:47 p.m. - 2:52 p.m.
Q&A Spandan Madan (Q&A)
Fri 2:52 p.m. - 2:57 p.m.
Closing remarks