Several recent papers have developed neural network program synthesizers by using supervised learning over large sets of randomly generated programs and specifications.
In this paper, we investigate the feasibility of this approach for program repair: given a specification and a candidate program assumed similar to a correct program for the specification, synthesize a program which meets the specification.
Working in the Karel domain with a dataset of synthetically generated candidates, we develop models that make effective use of the extra information in candidate programs, achieving a 40% error reduction compared to a baseline program synthesis model that receives only the specification and not a candidate program.