

A Framework and Benchmark for Deep Batch Active Learning for Regression

David Holzmüller · Viktor Zaverkin · Johannes Kästner · Ingo Steinwart

Halle B #218
[ Project Page ] [ Poster ] [ JMLR ]
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT


The acquisition of labels for supervised learning can be expensive. To improve the sample efficiency of neural network regression, we study active learning methods that adaptively select batches of unlabeled data for labeling. We present a framework for constructing such methods out of (network-dependent) base kernels, kernel transformations, and selection methods. Our framework encompasses many existing Bayesian methods based on Gaussian process approximations of neural networks as well as non-Bayesian methods. Additionally, we propose to replace the commonly used last-layer features with sketched finite-width neural tangent kernels and to combine them with a novel clustering method. To evaluate different methods, we introduce an open-source benchmark consisting of 15 large tabular regression data sets. Our proposed method outperforms the state-of-the-art on our benchmark, scales to large data sets, and works out-of-the-box without adjusting the network architecture or training code. We provide open-source code that includes efficient implementations of all kernels, kernel transformations, and selection methods, and can be used for reproducing our results.
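The framework described above composes a (network-dependent) kernel with a batch selection method. As a minimal illustration of that pipeline, the sketch below pairs last-layer features (one of the base kernels the abstract mentions) with a simple greedy farthest-point selection; the names and the specific selection rule are illustrative assumptions, not the paper's actual clustering method or implementation.

```python
import numpy as np

def select_batch(features: np.ndarray, batch_size: int) -> list[int]:
    """Greedy farthest-point batch selection on kernel feature vectors.

    Illustrative sketch only: `features` stands in for a base kernel's
    feature map (e.g. last-layer activations of the trained network);
    the real framework supports many kernels, transformations, and
    selection methods.
    """
    # Seed the batch with the point of largest feature norm.
    selected = [int(np.argmax(np.linalg.norm(features, axis=1)))]
    # Distance from each pool point to the nearest selected point.
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < batch_size:
        # Pick the pool point farthest from the current batch,
        # which encourages diversity in feature space.
        idx = int(np.argmax(dists))
        selected.append(idx)
        dists = np.minimum(dists,
                           np.linalg.norm(features - features[idx], axis=1))
    return selected

# Usage: select a diverse batch of 10 points from a pool of 100.
rng = np.random.default_rng(0)
pool_features = rng.normal(size=(100, 8))
batch = select_batch(pool_features, 10)
```

The greedy rule makes each new point maximally distant from all previously chosen ones, a common diversity heuristic in batch active learning; the paper's proposed clustering-based method is more elaborate.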
