Conditional Learning of Fair Representations

Han Zhao, Amanda Coston, Tameem Adel, Geoffrey J. Gordon

Keywords: fairness, representation learning

Thursday: Fairness, Interpretability and Deployment

Abstract: We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate, both in theory and in two real-world experiments, that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms for learning fair representations for classification.
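To make the two components named in the abstract concrete, below is a minimal sketch, assuming PyTorch, of how a balanced (per-class reweighted) error rate and label-conditional adversarial alignment of representations might be combined. The module names (FairNet, balanced_cross_entropy), network sizes, and the gradient-reversal trade-off parameter are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch (not the authors' code): one way to combine a balanced error
# rate objective with adversarial alignment of representations conditioned on
# the target label Y. All names, sizes, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class FairNet(nn.Module):
    def __init__(self, in_dim, rep_dim=32, n_classes=2, n_groups=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, rep_dim), nn.ReLU())
        self.classifier = nn.Linear(rep_dim, n_classes)
        # One adversary per target class: alignment is conditioned on Y.
        self.adversaries = nn.ModuleList(
            [nn.Linear(rep_dim, n_groups) for _ in range(n_classes)])

    def forward(self, x, y, lambd=1.0):
        z = self.encoder(x)
        y_logits = self.classifier(z)
        # Route each example through the adversary for its own label y, so the
        # adversary tries to recover the group attribute within each class.
        z_rev = grad_reverse(z, lambd)
        a_logits = torch.stack(
            [self.adversaries[int(yi)](zi) for zi, yi in zip(z_rev, y)])
        return y_logits, a_logits


def balanced_cross_entropy(logits, y, n_classes=2):
    # Reweight by inverse class frequency so each class contributes equally,
    # i.e. the loss targets the balanced error rate rather than raw accuracy.
    counts = torch.bincount(y, minlength=n_classes).clamp(min=1).float()
    weights = y.numel() / (n_classes * counts)
    return F.cross_entropy(logits, y, weight=weights)


# Illustrative training step on random data (placeholders for a real dataset).
model = FairNet(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 10)
y = torch.randint(0, 2, (128,))      # target label
a = torch.randint(0, 2, (128,))      # demographic group attribute

y_logits, a_logits = model(x, y, lambd=1.0)
loss = balanced_cross_entropy(y_logits, y) + F.cross_entropy(a_logits, a)
opt.zero_grad()
loss.backward()
opt.step()
```

The gradient reversal makes the encoder produce representations from which, within each class, the adversaries cannot recover the group attribute, which is one plausible reading of "conditional alignment"; the paper should be consulted for the exact objective and theory.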
