Poster
Indirect Gradient Matching for Adversarial Robust Distillation
Hongsin Lee · Seungju Cho · Changick Kim
Hall 3 + Hall 2B #319
Adversarial training significantly improves adversarial robustness, but superior performance is primarily attained with large models. This substantial performance gap for smaller models has spurred active research into adversarial distillation (AD) to mitigate the difference. Existing AD methods leverage the teacher’s logits as a guide. In contrast to these approaches, we aim to transfer another piece of knowledge from the teacher, the input gradient. In this paper, we propose a distillation module termed Indirect Gradient Distillation Module (IGDM) that indirectly matches the student’s input gradient with that of the teacher. Experimental results show that IGDM seamlessly integrates with existing AD methods, significantly enhancing their performance. Particularly, utilizing IGDM on the CIFAR-100 dataset improves the AutoAttack accuracy from 28.06% to 30.32% with the ResNet-18 architecture and from 26.18% to 29.32% with the MobileNetV2 architecture when integrated into the SOTA method without additional data augmentation.
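The abstract's core idea, matching input gradients *indirectly*, can be illustrated with a first-order Taylor argument: for a small perturbation δ, f(x + δ) − f(x) ≈ δ · ∇f(x), so penalizing the mismatch between teacher and student *output differences* acts as a proxy for penalizing their input-gradient mismatch, without computing second-order gradients. The toy NumPy sketch below is only an illustration of that principle, not the paper's actual loss; the functions `teacher`, `student`, and `output_difference` are hypothetical stand-ins.

```python
import numpy as np

def output_difference(f, x, x_adv):
    # First-order Taylor: f(x_adv) - f(x) ~ (x_adv - x) . grad f(x),
    # so matching output differences indirectly matches input gradients.
    return f(x_adv) - f(x)

# Toy scalar-output "teacher" and "student" with different input gradients
# (hypothetical stand-ins, not the models used in the paper).
teacher = lambda x: np.sin(x).sum()          # grad: cos(x)
student = lambda x: (0.9 * np.sin(x)).sum()  # grad: 0.9 * cos(x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
delta = 1e-3 * rng.normal(size=4)  # small perturbation (adversarial stand-in)

# Indirect gradient-matching penalty: squared mismatch of output differences.
indirect_loss = (output_difference(student, x, x + delta)
                 - output_difference(teacher, x, x + delta)) ** 2
```

Because δ is small, `output_difference(teacher, x, x + delta)` is close to `delta @ np.cos(x)`, so driving `indirect_loss` to zero pushes the student's input gradient toward the teacher's along the sampled directions, which is the sense in which the matching is "indirect".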