Poster
Combining Induction and Transduction for Abstract Reasoning
Wen-Ding Li · Keya Hu · Carter Larsen · Yuqing Wu · Simon Alford · Caleb Woo · Spencer Dunn · Hao Tang · Wei-Long Zheng · Yewen Pu · Kevin Ellis
Hall 3 + Hall 2B #232
When learning an input-output mapping from very few examples, is it better to first infer a latent function that explains the examples, or to directly predict new test outputs, e.g., using a neural network? We study this question on the Abstraction and Reasoning Corpus (ARC) by training neural models for induction (inferring latent functions) and transduction (directly predicting the test output for a given test input). We train on synthetically generated variations of Python programs that solve ARC training tasks. We find that inductive and transductive models solve different kinds of test problems, despite training on the same problems and sharing the same neural architecture: inductive program synthesis excels at precise computations and at composing multiple concepts, while transduction succeeds on fuzzier perceptual concepts. Ensembling them approaches human-level performance on ARC.
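As a rough illustration (not the paper's code or models), the induction/transduction distinction and a simple ensembling rule can be sketched in Python, the language of the paper's synthesized programs. The toy hypothesis space, the placeholder transducer, and the verify-then-fall-back rule below are all assumptions for exposition:

```python
# Illustrative sketch only: toy stand-ins for the paper's neural models.
from typing import Callable, Dict, List, Optional, Tuple

Grid = List[List[int]]
Pair = Tuple[Grid, Grid]  # (input grid, output grid)

# Hypothetical hypothesis space standing in for neural program synthesis.
PROGRAMS: Dict[str, Callable[[Grid], Grid]] = {
    "identity":  lambda g: [row[:] for row in g],
    "mirror_lr": lambda g: [row[::-1] for row in g],     # mirror left<->right
    "flip_ud":   lambda g: [row[:] for row in g[::-1]],  # flip top<->bottom
}

def induce(train: List[Pair]) -> Optional[Callable[[Grid], Grid]]:
    """Induction: search for a program that reproduces *every* training
    output. Its answer can be verified against the training examples."""
    for program in PROGRAMS.values():
        if all(program(x) == y for x, y in train):
            return program
    return None

def transduce(train: List[Pair], test_input: Grid) -> Grid:
    """Transduction: predict the test output directly. The paper trains a
    neural net for this; a trivial copy stands in here as a placeholder."""
    return [row[:] for row in test_input]

def solve(train: List[Pair], test_input: Grid) -> Grid:
    """One plausible ensembling rule (an assumption, not the paper's exact
    scheme): trust induction when a consistent program exists, since it is
    checkable on the training pairs; otherwise fall back to transduction."""
    program = induce(train)
    if program is not None:
        return program(test_input)
    return transduce(train, test_input)

# Toy task: outputs are left-right mirrors of inputs.
train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
print(solve(train, [[5, 6], [7, 8]]))  # -> [[6, 5], [8, 7]]
```

The asymmetry the abstract describes shows up even in this toy: the induced program's output is exactly checkable against the training pairs (suiting precise, compositional tasks), while the transducer just emits a guess with no such verification (suiting fuzzier perceptual ones).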