Towards Understanding the Data Dependency of Mixup-style Training

Muthu Chidambaram · Xiang Wang · Yuzheng Hu · Chenwei Wu · Rong Ge


Keywords: [ empirical risk minimization ] [ semi-supervised learning ] [ MixUp ] [ deep learning ] [ margin ] [ generalization ]

Abstract
Spotlight presentation: Tue 26 Apr, 6:30 p.m. – 8:30 p.m. PDT
In the Mixup training paradigm, a model is trained on convex combinations of pairs of data points and their associated labels. Despite seeing very few true data points during training, models trained with Mixup still appear to minimize the original empirical risk, and they exhibit better generalization and robustness than standard training across a variety of tasks. In this paper, we investigate how these benefits of Mixup training depend on properties of the data in the context of classification. For minimizing the original empirical risk, we derive a closed form for the Mixup-optimal classifier, which allows us to construct a simple dataset on which minimizing the Mixup loss leads to a classifier that does not minimize the empirical loss on the data. On the other hand, we also give sufficient conditions under which Mixup training does minimize the original empirical risk. For generalization, we characterize the margin of a Mixup classifier, and use this to explain why the decision boundary of a Mixup classifier can adapt better to the full structure of the training data than that of a standard classifier. In contrast, we also show that, for a large class of linear models and linearly separable datasets, Mixup training learns the same classifier as standard training.
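The core operation studied here is simple to state concretely. A minimal sketch of Mixup-style batch construction, assuming one-hot labels and NumPy (the mixing weight is drawn from a Beta distribution, as in the original Mixup formulation; the function and parameter names are illustrative, not from this paper):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Return convex combinations of a batch with a shuffled copy of itself.

    x: (n, d) array of inputs; y: (n, k) array of one-hot labels.
    A single mixing weight lam ~ Beta(alpha, alpha) is used for the batch.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))    # random pairing of data points
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

The model is then trained on `(x_mix, y_mix)` with the usual loss (e.g. cross-entropy against the soft labels), so it almost never sees an unmixed training point, which is what makes the paper's question of whether the original empirical risk is still minimized non-trivial.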
