Workshop

Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks

Zheng Xu · Yen-Chang Hsu · Jiawei Huang

East Meeting Level 8 + 15 #18

Wed 2 May, 4:30 p.m. PDT

There is increasing interest in accelerating neural networks for real-time applications. We study the student-teacher strategy, in which a small and fast student network is trained with auxiliary information learned from a large and accurate teacher network. We propose to use conditional adversarial networks to learn the loss function that transfers knowledge from teacher to student. Experiments on three different image datasets show that the student network gains a performance boost with the proposed training strategy.
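The following is a minimal sketch of the adversarially trained distillation idea described in the abstract, not the authors' actual implementation. It assumes PyTorch, placeholder `teacher`/`student` classifier modules, and a simple logit-level discriminator that tries to distinguish teacher outputs from student outputs; the exact conditioning scheme and architectures used in the paper are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Illustrative discriminator that scores a logit vector as teacher-like or student-like."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, logits):
        return self.net(logits)


def train_step(teacher, student, disc, x, y, opt_student, opt_disc, alpha=0.5):
    """One step: supervised loss on labels plus a learned adversarial distillation loss."""
    with torch.no_grad():
        t_logits = teacher(x)          # teacher is frozen during distillation
    s_logits = student(x)

    # Update discriminator: teacher logits are "real", student logits are "fake".
    d_real = disc(t_logits)
    d_fake = disc(s_logits.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()

    # Update student: fit the ground-truth labels and fool the discriminator,
    # so the student's outputs are pushed toward the teacher's distribution.
    d_fake = disc(s_logits)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_ce = F.cross_entropy(s_logits, y)
    loss_s = loss_ce + alpha * loss_adv
    opt_student.zero_grad()
    loss_s.backward()
    opt_student.step()
    return loss_s.item(), loss_d.item()
```

Here the discriminator plays the role of a learned loss function: instead of hand-designing a distance between teacher and student outputs, the student is trained to make its outputs indistinguishable from the teacher's. The weighting `alpha` and the logit-level discriminator are illustrative choices, not the paper's reported settings.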
