

In-Person Poster presentation / poster accept

Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization

Sangwon Jung · Taeeon Park · Sanghyuk Chun · Taesup Moon

MH1-2-3-4 #138

Keywords: [ Social Aspects of Machine Learning ] [ DRO ] [ Group Fairness ]


Abstract:

Many existing group fairness-aware training methods aim to achieve group fairness either by re-weighting underrepresented groups based on certain rules or by adding weakly approximated surrogates of the fairness metrics to the objective as regularization terms. Although each learning scheme has its own strength in terms of applicability or performance, respectively, it is difficult for any method in either category to be considered a gold standard, since its success is typically limited to specific cases. To that end, we propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective via a classwise distributionally robust optimization (DRO) framework. We then develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group. Our experiments show that FairDRO is scalable and easily adaptable to diverse applications, and that it consistently achieves state-of-the-art accuracy-fairness trade-offs on several benchmark datasets compared to recent strong baselines.
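The abstract does not spell out the update rule, so the sketch below only illustrates the general idea of classwise DRO-style re-weighting in PyTorch; it is not the authors' FairDRO algorithm. The function names classwise_dro_weights and weighted_objective, the step size eta, and the exponentiated-gradient update within each class are all assumptions for illustration.

```python
import torch

def classwise_dro_weights(losses, eta=0.1, prev_weights=None):
    """Hypothetical classwise DRO re-weighting (not the paper's exact rule).

    losses: tensor of shape (num_classes, num_groups) holding the average loss
            of each (class, group) cell on the current batch.
    Returns weights of the same shape that sum to 1 within each class (row),
    up-weighting the groups that currently incur higher loss.
    """
    if prev_weights is None:
        prev_weights = torch.full_like(losses, 1.0 / losses.size(1))
    # Exponentiated-gradient style update: multiply previous weights by
    # exp(eta * loss), then renormalize per class via a row-wise softmax.
    logits = torch.log(prev_weights + 1e-12) + eta * losses
    return torch.softmax(logits, dim=1)

def weighted_objective(per_sample_loss, labels, groups, weights):
    """Re-weight each sample's loss by the weight of its (class, group) cell.

    per_sample_loss: (batch,) unreduced losses;
    labels, groups: (batch,) integer tensors indexing class and group.
    """
    w = weights[labels, groups]
    return (w * per_sample_loss).sum() / w.sum()
```

In a training loop, one would compute per-sample losses with reduction='none', average them into the (class, group) grid, refresh the weights with classwise_dro_weights, and backpropagate only through weighted_objective, treating the weights as constants.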
