Oral in Workshop: PAIR^2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data

Maximizing Entropy on Adversarial Examples Can Improve Generalization

Amrith Setlur · Benjamin Eysenbach

Fri 29 Apr 2:30 p.m. PDT — 2:40 p.m. PDT
 
Presentation: PAIR^2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data
Fri 29 Apr 9 a.m. PDT — 6 p.m. PDT

Abstract:

Supervised learning methods that directly optimize the cross-entropy loss on training data often overfit. This overfitting is typically mitigated by regularizing the loss function (e.g., label smoothing) or by minimizing the same loss on new examples (e.g., data augmentation and adversarial training). In this work, we propose a complementary regularization strategy: Maximum Predictive Entropy (MPE), which forces the model to be uncertain on new, algorithmically-generated inputs. Across a range of tasks, we demonstrate that our computationally efficient method improves test accuracy, and that its benefits are complementary to those of methods such as label smoothing and data augmentation.
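The abstract describes the method only at a high level; the following is a minimal PyTorch sketch of the idea, not the paper's implementation. The single-step FGSM perturbation used here to generate the new inputs, and the names `predictive_entropy`, `mpe_loss`, `epsilon`, and `mpe_weight`, are illustrative assumptions.

```python
# Minimal sketch of entropy maximization on algorithmically-generated
# inputs, in the spirit of the abstract above. FGSM and the hyperparameter
# names are assumed choices, not the paper's exact procedure.
import torch
import torch.nn.functional as F


def predictive_entropy(logits):
    """Mean entropy of the model's predictive distribution."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()


def mpe_loss(model, x, y, epsilon=0.03, mpe_weight=1.0):
    """Cross entropy on clean inputs minus predictive entropy on
    perturbed inputs; minimizing this maximizes the model's
    uncertainty on the generated examples."""
    # Standard supervised term on the training data.
    ce = F.cross_entropy(model(x), y)

    # Generate new inputs with one FGSM step (an assumed choice).
    x_adv = x.detach().clone().requires_grad_(True)
    adv_ce = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(adv_ce, x_adv)
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Encourage maximal uncertainty on the generated inputs.
    ent = predictive_entropy(model(x_adv))
    return ce - mpe_weight * ent
```

In a training loop, minimizing this combined loss fits the labeled data while pushing the model toward high uncertainty on the generated inputs; `mpe_weight` controls the trade-off between the two terms.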
