Poster in Workshop: PAIR^2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data

Maximizing entropy on adversarial examples can improve generalization

Amrith Setlur · Benjamin Eysenbach


Abstract:

Supervised learning methods that directly optimize the cross-entropy loss on training data often overfit. This overfitting is typically mitigated by regularizing the loss function (e.g., label smoothing) or by minimizing the same loss on new examples (e.g., data augmentation and adversarial training). In this work, we propose a complementary regularization strategy: Maximum Predictive Entropy (MPE), which forces the model to be uncertain on new, algorithmically generated inputs. Across a range of tasks, we demonstrate that our computationally efficient method improves test accuracy, and that its benefits are complementary to methods such as label smoothing and data augmentation.
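The abstract does not spell out the training objective, but the idea of combining a standard cross-entropy term with an entropy-maximization term on generated inputs can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the names `mpe_loss` and `alpha` (the regularization weight) are hypothetical, and the procedure for producing the "algorithmically-generated inputs" is omitted — the sketch only shows how maximizing predictive entropy on such inputs would enter the loss.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    """Entropy of the predictive distribution, per example."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def mpe_loss(train_logits, train_labels, new_logits, alpha=0.1):
    """Hypothetical combined objective: cross-entropy on training
    examples minus alpha times the mean predictive entropy on new,
    generated inputs. Minimizing this loss *maximizes* entropy on
    the new inputs, pushing the model toward uncertainty there."""
    p = softmax(train_logits)
    ce = -np.log(p[np.arange(len(train_labels)), train_labels] + 1e-12).mean()
    return ce - alpha * predictive_entropy(new_logits).mean()
```

Note that because the entropy term enters with a negative sign, a confident (low-entropy) prediction on a generated input is penalized, while a uniform (maximum-entropy) prediction incurs no penalty.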