Invited Talk in Workshop: Bridging the Gap Between Practice and Theory in Deep Learning
Invited Talk 3: Knowledge Distillation as Semiparametric Inference
Lester Mackey
More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Knowledge distillation alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. To explain and enhance this phenomenon, we cast knowledge distillation as a semiparametric inference problem with the optimal student model as the target, the unknown Bayes class probabilities as nuisance, and the teacher probabilities as a plug-in nuisance estimate. By adapting modern semiparametric tools, we derive new guarantees for the prediction error of standard distillation and develop two enhancements—cross-fitting and loss correction—to mitigate the impact of teacher overfitting and underfitting on student performance. We validate our findings empirically on both tabular and image data and observe consistent improvements from our knowledge distillation enhancements.
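To make the cross-fitting idea concrete, the sketch below illustrates one way it could look in code on a synthetic binary tabular task: each sample's soft label comes from a teacher trained on the other folds, so teacher overfitting does not leak into the student that is trained against those probabilities. The specific model choices (a RandomForestClassifier teacher, a hand-rolled logistic-regression student), the dataset, and all hyperparameters are illustrative assumptions, not the exact setup from the talk, and the loss-correction enhancement is omitted.

```python
# Minimal sketch: knowledge distillation with cross-fitted teacher probabilities.
# Assumptions: binary tabular classification, sklearn teacher, logistic student.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Cross-fitting: soft labels for each fold come from a teacher that never saw
# that fold, mitigating the effect of teacher overfitting on the student.
soft_labels = np.zeros(len(y))
for train_idx, held_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    teacher = RandomForestClassifier(n_estimators=200, random_state=0)
    teacher.fit(X[train_idx], y[train_idx])
    soft_labels[held_idx] = teacher.predict_proba(X[held_idx])[:, 1]

# Student: logistic regression fit by gradient descent on the cross-entropy
# against the teacher's cross-fitted probabilities instead of the hard labels.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = p - soft_labels          # gradient of cross-entropy w.r.t. the logits
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

student_acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"student training accuracy: {student_acc:.3f}")
```

Standard distillation would instead fit a single teacher on all of the data and reuse its in-sample probabilities as soft labels; the cross-fitted variant only changes where those plug-in nuisance estimates come from.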