Guiding Program Synthesis by Learning to Generate Examples

Larissa Laich, Pavol Bielik, Martin Vechev

Keywords: program synthesis

Abstract: A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well. This can be difficult to achieve as the specification provided by the end user is often limited, containing as few as one or two input-output examples. In this paper we address this challenge via an iterative approach that finds ambiguities in the provided specification and learns to resolve these by generating additional input-output examples. The main insight is to reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. As a result, to train our probabilistic models, we can take advantage of the large amounts of data in the form of program outputs, which are often much easier to obtain than the corresponding ground-truth programs.
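The abstract outlines an iterative disambiguation loop: synthesize the candidate programs consistent with the current examples, find an input on which they disagree, use a learned model to decide which output is correct, and add that input-output pair as a new example. The following Python sketch is a minimal, hypothetical illustration of that loop under stated assumptions; the toy string DSL, the score_output heuristic, and all function names are placeholders for illustration, not the authors' implementation.

    # Hypothetical sketch of the iterative disambiguation loop described above.
    # `synthesize`, `score_output`, and the toy DSL are illustrative assumptions.
    from typing import Callable, List, Tuple

    Example = Tuple[str, str]          # (input, output) pair
    Program = Callable[[str], str]     # a candidate program in a toy DSL


    def synthesize(spec: List[Example], dsl: List[Program]) -> List[Program]:
        """Return all DSL programs consistent with the current specification."""
        return [p for p in dsl if all(p(x) == y for x, y in spec)]


    def score_output(inp: str, out: str) -> float:
        """Stand-in for the learned model that judges how plausible an output is.
        In the paper this is a probabilistic model trained on program outputs;
        here a trivial heuristic keeps the sketch runnable."""
        return -abs(len(out) - len(inp))


    def disambiguate(spec: List[Example], dsl: List[Program],
                     probe_inputs: List[str], max_rounds: int = 5) -> Program:
        """Iteratively add generated examples until one candidate remains."""
        spec = list(spec)
        for _ in range(max_rounds):
            candidates = synthesize(spec, dsl)
            if len(candidates) <= 1:
                break
            # Look for an input on which the candidates disagree, i.e. an
            # ambiguity in the specification.
            for inp in probe_inputs:
                outputs = {p(inp) for p in candidates}
                if len(outputs) > 1:
                    # Let the learned scorer pick the correct output and add
                    # it to the specification as a new input-output example.
                    best = max(outputs, key=lambda out: score_output(inp, out))
                    spec.append((inp, best))
                    break
            else:
                break  # no disagreement found among the probe inputs
        return synthesize(spec, dsl)[0]


    if __name__ == "__main__":
        # Toy DSL: a handful of string transformations.
        dsl: List[Program] = [
            lambda s: s.upper(),
            lambda s: s[:3].upper(),
            lambda s: s[0].upper(),
        ]
        # A single user-provided example leaves the intended program ambiguous.
        spec: List[Example] = [("abc", "ABC")]
        program = disambiguate(spec, dsl, probe_inputs=["hello", "xy"])
        print(program("hello"))  # resolves to the full-uppercase program

In this sketch, the single example ("abc", "ABC") is consistent with both the full-uppercase and the first-three-characters-uppercase programs; the generated example for "hello" resolves the ambiguity. The paper's contribution is in learning the scoring model from program outputs rather than hand-coding it as done here.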

Similar Papers

Program Guided Agent
Shao-Hua Sun, Te-Lin Wu, Joseph J. Lim
CLN2INV: Learning Loop Invariants with Continuous Logic Networks
Gabriel Ryan, Justin Wong, Jianan Yao, Ronghui Gu, Suman Jana
Neural Module Networks for Reasoning over Text
Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, Matt Gardner