
Poster

Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification

Francisco Utrera · Evan Kravitz · N. Benjamin Erichson · Rajiv Khanna · Michael W Mahoney

Keywords: [ adversarial training ] [ transfer learning ] [ influence functions ] [ limited data ]


Abstract:

Transfer learning has emerged as a powerful methodology for adapting deep neural networks pre-trained on image recognition tasks to new domains. This process consists of taking a neural network pre-trained on a large, feature-rich source dataset, freezing the early layers that encode essential generic image properties, and then fine-tuning the last few layers to capture information specific to the target task. This approach is particularly useful when only limited or weakly labeled data are available for the new task. In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models, especially when only limited data are available for the target task. Further, we observe that adversarial training biases the learned representations toward retaining shapes, as opposed to textures, which impacts the transferability of the source models. Finally, through the lens of influence functions, we discover that transferred adversarially-trained models contain more human-identifiable semantic information, which explains -- at least partly -- why adversarially-trained models transfer better.
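To make the freeze-and-fine-tune recipe concrete, below is a minimal PyTorch/torchvision sketch (not the authors' code): the early layers of a pre-trained ResNet-50 are frozen and only a new final classification layer is trained on the target task. The adversarially-trained checkpoint path and the 10-class target task are hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pre-trained source model. In the paper's setting, adversarially-
# trained weights would be loaded instead, e.g. (hypothetical path):
#   model.load_state_dict(torch.load("adv_resnet50.pt"))
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze all parameters, keeping the generic early-layer features fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer to match the target task
# (10 classes chosen here for illustration); the new layer's parameters
# are trainable by default.
num_target_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Fine-tune only the new head on the (possibly small) target dataset.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

In practice, "the last few layers" may include more than the final classifier; unfreezing additional late-stage blocks follows the same pattern of setting `requires_grad = True` on their parameters.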
