

In-Person Poster presentation / poster accept

Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions

David Bieber · Rishab Goel · Daniel Zheng · Hugo Larochelle · Daniel Tarlow

MH1-2-3-4 #56

Keywords: [ Applications ] [ graph neural networks ] [ source code ] [ recurrent networks ] [ program analysis ] [ attention mechanisms ] [ program execution ]


Abstract:

The execution behavior of a program often depends on external resources, such as program inputs or file contents, and so the program cannot be run in isolation. Nevertheless, software developers benefit from fast iteration loops where automated tools identify errors as early as possible, even before programs can be compiled and run. This presents an interesting machine learning challenge: can we predict runtime errors in a "static" setting, where program execution is not possible? Here, we introduce a competitive programming dataset and task for predicting runtime errors, which we show is difficult for generic models like Transformers. We approach this task by developing an interpreter-inspired architecture with an inductive bias towards mimicking program executions, which models exception handling and "learns to execute" descriptions of external resources. Surprisingly, we show that the model can also predict the locations of errors, despite being trained only on labels indicating the presence or absence of an error and its kind. In total, we present a practical and difficult-yet-approachable challenge problem related to learning program execution behavior, and we demonstrate promising new capabilities of interpreter-inspired machine learning models for code.
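To make the task concrete, here is a hedged illustration (not an example from the paper's dataset, and not the authors' model): a small competitive-programming-style program whose runtime outcome depends entirely on external input. A static predictor must decide, without executing the program, whether a runtime error can occur and of what kind; the harness below simply demonstrates how different inputs produce different error classes.

```python
# Illustrative sketch: a program whose runtime behavior depends on external
# input read from stdin. The PROGRAM source and run_with_input helper are
# hypothetical names introduced here for illustration only.
import io
import sys

PROGRAM = """
n = int(input())   # ValueError if the input line is not an integer
print(100 // n)    # ZeroDivisionError if n == 0
"""

def run_with_input(source: str, stdin_text: str) -> str:
    """Execute `source` with the given stdin contents; report the error class, if any."""
    old_stdin = sys.stdin
    sys.stdin = io.StringIO(stdin_text)
    try:
        exec(source, {})
        return "no error"
    except Exception as e:
        return type(e).__name__
    finally:
        sys.stdin = old_stdin

# Different external inputs trigger different runtime outcomes:
print(run_with_input(PROGRAM, "4\n"))    # no error
print(run_with_input(PROGRAM, "0\n"))    # ZeroDivisionError
print(run_with_input(PROGRAM, "abc\n"))  # ValueError
```

The "static" version of this task would ask a model to predict these outcomes from the program text and a description of the input format alone, with no execution allowed.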
