

Poster

Churn Reduction via Distillation

Heinrich Jiang · Harikrishna Narasimhan · Dara Bahri · Andrew Cotter · Afshin Rostamizadeh

Keywords: [ constraints ] [ distillation ]


Abstract:

In real-world systems, models are frequently updated as more data becomes available, and in addition to achieving high accuracy, the goal is also to maintain a small difference in predictions compared to the base model (i.e., predictive churn). If model retraining results in vastly different behavior, it can cause negative effects in downstream systems, especially if this churn could have been avoided with limited impact on model accuracy. In this paper, we show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn. We then show that distillation performs strongly for low-churn training against a number of recent baselines on a wide range of datasets and model architectures, including fully-connected networks, convolutional networks, and transformers.
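
The following is a minimal sketch (not the authors' code) of the idea the abstract describes: retraining a new model with a distillation term that uses the frozen base model as the teacher, which the paper relates to constraining predictive churn. The PyTorch setup, mixing weight `lam`, temperature, and model/loader names are illustrative assumptions.

```python
# Hypothetical sketch of distillation-based low-churn retraining (PyTorch assumed).
import torch
import torch.nn.functional as F

def churn_distillation_loss(student_logits, base_logits, labels, lam=0.5, temperature=1.0):
    """Convex combination of the usual label loss and a distillation term
    that pulls the new model's predictions toward the base (teacher) model."""
    # Standard cross-entropy on the ground-truth labels.
    label_loss = F.cross_entropy(student_logits, labels)
    # KL divergence from the base model's soft predictions to the student's.
    teacher_probs = F.softmax(base_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    distill_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    return (1 - lam) * label_loss + lam * distill_loss

def train_low_churn(student, base_model, loader, epochs=1, lam=0.5):
    """Retrain `student` on new data while distilling from the frozen base model."""
    base_model.eval()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                base_logits = base_model(x)  # teacher predictions, no gradient
            loss = churn_distillation_loss(student(x), base_logits, y, lam=lam)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Larger values of the illustrative weight `lam` push the retrained model's predictions closer to the base model (lower churn), at the possible cost of accuracy on the new data.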
