

Poster

Enhancing Robust Fairness via Confusional Spectral Regularization

Gaojie Jin · Sihao Wu · Jiaxu Liu · Tianjin Huang · Ronghui Mu

Hall 3 + Hall 2B #488
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Recent research has highlighted a critical issue known as "robust fairness", where robust accuracy varies significantly across classes, undermining the reliability of deep neural networks (DNNs). A common approach to address this has been to dynamically reweight classes during training, giving more weight to those with lower empirical robust performance. However, we find that class-wise robust performance diverges between the training set and the test set, which limits the effectiveness of these explicit reweighting methods and indicates the need for a principled alternative.

In this work, we derive a robust generalization bound for the worst-class robust error within the PAC-Bayesian framework, accounting for unknown data distributions. Our analysis shows that the worst-class robust error is influenced by two main factors: the spectral norm of the empirical robust confusion matrix and the information embedded in the model and training set. While the latter has been extensively studied, we propose a novel regularization technique targeting the spectral norm of the robust confusion matrix to improve worst-class robust accuracy and enhance robust fairness.

We validate our approach through comprehensive experiments on various datasets and models, demonstrating its effectiveness in enhancing robust fairness.
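To make the regularization idea concrete, below is a minimal sketch of how a spectral-norm penalty on a robust confusion matrix could be added to adversarial training. It assumes a differentiable (soft) confusion matrix built from softmax probabilities on adversarial inputs, and the names soft_confusion, robust_fairness_loss, and the weighting lam are illustrative assumptions, not the authors' implementation; the paper's exact formulation may differ.

# Hypothetical sketch: spectral-norm regularization of a soft robust confusion matrix.
import torch
import torch.nn.functional as F

def soft_confusion(logits: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Row-normalized soft confusion matrix C, where C[i, j] is the average
    predicted probability of class j over examples whose true label is i."""
    probs = F.softmax(logits, dim=1)                 # (N, K) predicted class probabilities
    onehot = F.one_hot(labels, num_classes).float()  # (N, K) true-label indicators
    counts = onehot.sum(dim=0).clamp(min=1.0)        # per-class example counts
    conf = onehot.t() @ probs                        # (K, K) summed probabilities per true class
    return conf / counts.unsqueeze(1)                # row-normalize by class size

def robust_fairness_loss(model, x_adv, y, num_classes, lam=0.1):
    """Robust cross-entropy on adversarial inputs (x_adv, generated elsewhere, e.g. by PGD)
    plus a penalty on the spectral norm (largest singular value) of the soft confusion matrix."""
    logits = model(x_adv)
    ce = F.cross_entropy(logits, y)
    C = soft_confusion(logits, y, num_classes)
    spec = torch.linalg.matrix_norm(C, ord=2)        # spectral norm, differentiable via SVD
    return ce + lam * spec

The intuition is that a confusion matrix close to the identity (balanced, class-uniform robust accuracy) has a small spectral norm, whereas one with a few heavily confused classes inflates its largest singular value, so penalizing that norm implicitly discourages concentrated, class-specific robust errors without explicit per-class reweighting.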
