

Poster

Automatically Composing Representation Transformations as a Means for Generalization

Michael Chang · Abhishek Gupta · Sergey Levine · Thomas L Griffiths

Great Hall BC #83

Keywords: [ deep learning ] [ compositionality ] [ metareasoning ]


Abstract:

A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution. This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems. We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon. As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems. We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not.
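To make the recipe in the abstract concrete, here is a minimal Python sketch of a learner that composes representation transformations: a controller repeatedly chooses which transformation from a library to apply to the current representation, recursively, until it decides to halt. Everything here is a hypothetical stand-in (the toy string domain, the module names, and the lookup-table controller); in the actual framework the controller's choices would be learned rather than hand-specified.

```python
# A minimal sketch of composing representation transformations, assuming a
# hypothetical library of string-to-string modules. This illustrates the
# general recipe from the abstract, not the paper's actual implementation,
# in which the controller is learned.

# A library of primitive transformations (hypothetical examples on strings).
MODULES = {
    "reverse": lambda s: s[::-1],
    "upper": lambda s: s.upper(),
    "strip_vowels": lambda s: "".join(c for c in s if c.lower() not in "aeiou"),
}
HALT = "halt"


def controller(representation, policy):
    """Pick the next transformation (or HALT) for the current representation.

    In the framework described above this decision is learned; here a
    trivial lookup table `policy` maps representations to actions.
    """
    return policy.get(representation, HALT)


def compose_transformations(x, policy, max_steps=10):
    """Recursively apply chosen transformations until the controller halts."""
    for _ in range(max_steps):
        action = controller(x, policy)
        if action == HALT:
            break
        x = MODULES[action](x)  # transform the current representation
    return x


# Usage: a hand-specified policy that chains two modules on one input.
policy = {"hello world": "strip_vowels", "hll wrld": "upper"}
print(compose_transformations("hello world", policy))  # -> "HLL WRLD"
```

Because the solution to a harder problem is just a longer composition of the same primitive modules, a learner of this shape can in principle reuse old transformations on more complex inputs than it was trained on, which is the compositional generalization the abstract argues for.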
