Poster
MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering
Jun Shern Chan · Neil Chowdhury · Oliver Jaffe · James Aung · Dane Sherburn · Evan Mays · Giulio Starace · Kevin Liu · Leon Maksin · Tejal Patwardhan · Aleksander Madry · Lilian Weng
Hall 3 + Hall 2B #367
Thu 24 Apr 12:30 a.m. PDT — 2 a.m. PDT
We introduce MLE-bench, a benchmark for measuring how well AI agents perform at machine learning engineering. To this end, we curate 75 ML engineering-related competitions from Kaggle, creating a diverse set of challenging tasks that test real-world ML engineering skills such as training models, preparing datasets, and running experiments. We establish human baselines for each competition using Kaggle's publicly available leaderboards. We use open-source agent scaffolds to evaluate several frontier language models on our benchmark, finding that the best-performing setup (OpenAI's o1-preview with AIDE scaffolding) achieves at least the level of a Kaggle bronze medal in 16.9% of competitions. In addition to our main results, we investigate various forms of resource-scaling for AI agents and the impact of contamination from pre-training. We open-source our benchmark code (https://github.com/openai/mle-bench) to facilitate future research in understanding the ML engineering capabilities of AI agents.
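To make the headline metric concrete, below is a minimal illustrative sketch (not the MLE-bench API) of how one might compute the fraction of competitions in which an agent's best submission earns at least a Kaggle bronze medal, given per-competition medal outcomes derived from leaderboard thresholds. The function name `any_medal_rate`, the `CompetitionResult` type, and the toy competition slugs are hypothetical and for illustration only.

```python
# Hypothetical sketch: fraction of competitions where an agent reaches
# at least a given medal tier. Not the MLE-bench repository's API.
from dataclasses import dataclass

MEDAL_RANKS = {"none": 0, "bronze": 1, "silver": 2, "gold": 3}

@dataclass
class CompetitionResult:
    competition_id: str  # Kaggle competition slug (illustrative)
    medal: str           # best medal achieved by the agent: "none", "bronze", "silver", or "gold"

def any_medal_rate(results: list[CompetitionResult], at_least: str = "bronze") -> float:
    """Return the fraction of competitions where the agent achieved at least `at_least`."""
    if not results:
        return 0.0
    threshold = MEDAL_RANKS[at_least]
    hits = sum(MEDAL_RANKS[r.medal] >= threshold for r in results)
    return hits / len(results)

if __name__ == "__main__":
    # Toy, made-up outcomes purely for illustration.
    toy = [
        CompetitionResult("spaceship-titanic", "bronze"),
        CompetitionResult("denoising-dirty-documents", "none"),
        CompetitionResult("dog-breed-identification", "silver"),
    ]
    print(f"Any-medal (>= bronze) rate: {any_medal_rate(toy):.1%}")
```

A reported figure such as 16.9% corresponds to this rate computed over the benchmark's 75 competitions.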