Poster

Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds

Jordan Ash · Akshay Krishnamurthy · John Langford · Alekh Agarwal · Chicheng Zhang


Abstract:

We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well or better, making it a useful option for real-world active learning problems.
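To make the selection strategy concrete, here is a minimal sketch (not the authors' released implementation) of how such a batch can be chosen. It assumes a classifier with a linear last layer, takes the gradient embedding of each unlabeled point to be the gradient of the cross-entropy loss at the hallucinated (most likely) label with respect to that layer, and selects the batch by k-means++ seeding over these embeddings; the function names gradient_embeddings and badge_select are illustrative.

```python
import numpy as np


def gradient_embeddings(probs: np.ndarray, features: np.ndarray) -> np.ndarray:
    """probs: (n, C) softmax outputs over the unlabeled pool;
    features: (n, d) penultimate-layer activations.
    Returns (n, C*d) hallucinated-gradient embeddings."""
    n = probs.shape[0]
    yhat = probs.argmax(axis=1)                      # hallucinated labels
    delta = probs.copy()
    delta[np.arange(n), yhat] -= 1.0                 # p - e_{yhat}
    # Per-example outer product (p - e_yhat) x h, flattened: the softmax
    # residual carries the uncertainty (magnitude), the features the direction.
    return (delta[:, :, None] * features[:, None, :]).reshape(n, -1)


def badge_select(emb: np.ndarray, k: int, seed: int = 0) -> list:
    """k-means++ seeding over gradient embeddings: start from the
    largest-magnitude embedding, then repeatedly sample a point with
    probability proportional to its squared distance from the points
    already chosen. Distance encourages diversity; magnitude encourages
    uncertainty. Assumes the pool has more than k distinct embeddings."""
    rng = np.random.default_rng(seed)
    n = emb.shape[0]
    first = int(np.argmax(np.linalg.norm(emb, axis=1)))
    chosen = [first]
    d2 = np.sum((emb - emb[first]) ** 2, axis=1)     # sq. dist to nearest center
    while len(chosen) < k:
        idx = int(rng.choice(n, p=d2 / d2.sum()))
        chosen.append(idx)
        d2 = np.minimum(d2, np.sum((emb - emb[idx]) ** 2, axis=1))
    return chosen
```

With probs and features obtained from a single forward pass over the unlabeled pool, badge_select(gradient_embeddings(probs, features), k=B) returns the indices of the B points to query next.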
