On Statistical Bias In Active Learning: How and When to Fix It

Sebastian Farquhar, Yarin Gal, Tom Rainforth


Active learning is a powerful tool when labelling data is expensive, but it introduces a bias because the training data no longer follows the population distribution. We formalize this bias and investigate the situations in which it can be harmful and those in which it can even be helpful. We further introduce novel corrective weights that remove the bias when doing so is beneficial. Through this, our work not only provides a useful mechanism for improving active learning, but also explains the empirical success of various existing approaches that ignore this bias. In particular, we show that the bias can be actively helpful when training overparameterized models, like neural networks, with relatively modest dataset sizes.
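The paper's corrective weights are derived in the full text; as a hedged illustration of the underlying idea only, the sketch below shows generic importance weighting, where each actively acquired point is reweighted by the ratio of the population probability to its acquisition probability. The pool of losses, the acquisition distribution `q`, and the sample sizes here are all invented for illustration and are not the paper's estimator or experiments.

```python
import random

# Hypothetical pool of per-point losses; the population risk is their mean.
random.seed(0)
pool = [random.gauss(1.0, 0.3) for _ in range(10_000)]
true_risk = sum(pool) / len(pool)

# A made-up acquisition distribution q that over-samples high-loss points,
# standing in for an active-learning acquisition function.
scores = [loss ** 2 for loss in pool]
total = sum(scores)
q = [s / total for s in scores]

# Draw an "actively acquired" sample (with replacement) under q.
idx = random.choices(range(len(pool)), weights=q, k=2_000)

# Naive risk estimate: biased upward, because the sample follows q,
# not the uniform population distribution.
naive = sum(pool[i] for i in idx) / len(idx)

# Importance-weighted estimate: weight each point by p(i) / q(i),
# which for a uniform population p(i) = 1/N is 1 / (N * q(i)).
N = len(pool)
weighted = sum(pool[i] / (N * q[i]) for i in idx) / len(idx)

print(f"true risk          {true_risk:.3f}")
print(f"naive (biased)     {naive:.3f}")
print(f"importance-weighted {weighted:.3f}")
```

The weighted estimate recovers the population risk in expectation, while the naive estimate inherits the acquisition bias; the paper's contribution is characterizing when leaving that bias uncorrected helps rather than hurts.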
