

Poster in Workshop: Workshop on the Elements of Reasoning: Objects, Structure and Causality

Inductive Biases for Relational Tasks

Giancarlo Kerg · Sarthak Mittal · David Rolnick · Yoshua Bengio · Blake A Richards · Guillaume Lajoie


Abstract:

Current deep learning approaches have shown good in-distribution performance but struggle in out-of-distribution settings. This is especially true for tasks involving abstract relations, such as recognizing rules in sequences, as required in many intelligence tests. In contrast, our brains are remarkably flexible at such tasks, an attribute that is likely linked to anatomical constraints on computations. Inspired by this, recent work has explored how enforcing that relational representations remain distinct from sensory representations can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by "partitioned" representations of relations and sensory details. We investigate inductive biases that ensure abstract relations are learned and represented distinctly from sensory data across several neural network architectures, and show that they outperform existing architectures on out-of-distribution generalization for various relational tasks. These results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing relational computations.
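To make the idea of a "partitioned" relational representation concrete, below is a minimal, hypothetical sketch (not the authors' architecture) of one way such an inductive bias can be implemented in PyTorch: per-object sensory features are encoded separately, and the downstream decision is made only from the pairwise similarity (relation) matrix, so relational information never mixes with raw sensory detail. All module names, dimensions, and the similarity choice are illustrative assumptions.

```python
# Hypothetical sketch of a partitioned relational bias (assumed design, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartitionedRelationalNet(nn.Module):
    def __init__(self, input_dim, embed_dim, num_objects, num_classes):
        super().__init__()
        # Sensory stream: encodes each object independently.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Relational stream: sees only the (num_objects x num_objects)
        # similarity matrix, never the sensory embeddings themselves.
        self.relation_head = nn.Sequential(
            nn.Linear(num_objects * num_objects, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, x):
        # x: (batch, num_objects, input_dim)
        z = self.encoder(x)                       # per-object sensory embeddings
        z = F.normalize(z, dim=-1)                # cosine-style similarity
        rel = torch.bmm(z, z.transpose(1, 2))     # pairwise relation matrix
        return self.relation_head(rel.flatten(1)) # decision uses relations only

# Tiny usage example on random data (shapes are assumptions).
model = PartitionedRelationalNet(input_dim=32, embed_dim=64,
                                 num_objects=5, num_classes=2)
logits = model(torch.randn(8, 5, 32))
print(logits.shape)  # torch.Size([8, 2])
```

Because the classifier only ever receives the relation matrix, rules defined over relations (e.g., "all objects are identical") can in principle transfer to objects with unseen sensory statistics, which is the kind of out-of-distribution generalization the abstract describes.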
