

Poster
in
Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

Optimizing for ROC Curves on Class-Imbalanced Data by Training over a Family of Loss Functions

Kelsey Lieberman · Shuai Yuan · Swarna Ravindran · Carlo Tomasi


Abstract:

Although binary classification is a well-studied problem in computer vision, training reliable classifiers under severe class imbalance remains challenging. Vector Scaling (VS) loss is a general method for training under imbalance with strong theoretical backing. However, we observe that in important practical cases (binary problems with severe imbalance), slight changes in the hyperparameter values of VS loss can result in highly variable performance. Furthermore, we recognize that different hyperparameter values of VS loss optimize for different tradeoffs between minority-class and majority-class accuracy, so different hyperparameter values are better suited to different parts of the Receiver Operating Characteristic (ROC) curve. We propose to exploit this observation by training over a family of loss functions instead of a single loss function. Extensive experimental results, on both CIFAR and Kaggle competition datasets, show that our method improves model performance and is more robust to hyperparameter choices.
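For readers unfamiliar with VS loss, the following is a minimal NumPy sketch of its standard parameterization (a cross-entropy on per-class multiplicatively scaled and additively shifted logits, following Kini et al., 2021). The hyperparameter names `gamma` and `tau` and the exact factor definitions are illustrative assumptions, not taken from this abstract; sweeping such hyperparameters is what yields the family of losses the abstract refers to.

```python
import numpy as np

def vs_loss(logits, labels, class_priors, gamma=0.2, tau=1.0):
    """Sketch of Vector Scaling (VS) loss for classification under imbalance.

    Assumed parameterization (per Kini et al., 2021):
      Delta_c = (n_c / n_max) ** gamma   -- multiplicative logit scaling
      iota_c  = tau * log(prior_c)       -- additive logit shift
    gamma and tau are the hyperparameters whose settings trade off
    minority-class vs. majority-class accuracy.
    """
    priors = np.asarray(class_priors, dtype=float)
    delta = (priors / priors.max()) ** gamma      # multiplicative factors
    iota = tau * np.log(priors)                   # additive shifts
    adj = logits * delta + iota                   # adjusted logits
    # Numerically stable softmax cross-entropy on the adjusted logits.
    adj = adj - adj.max(axis=1, keepdims=True)
    log_probs = adj - np.log(np.exp(adj).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Note that with `gamma=0` and `tau=0` the scaling and shift vanish and the loss reduces to ordinary softmax cross-entropy; nonzero values bias training toward one end of the ROC curve or the other.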
