

Poster in Workshop: Socially Responsible Machine Learning

The Impacts of Labeling Biases on Fairness Criteria

Yiqiao Liao · Parinaz Naghizadeh


Abstract:

As we increasingly rely on artificially intelligent algorithms to aid or automate decision making, we face the challenge of ensuring that these algorithms do not exhibit or amplify our existing social biases. An issue complicating the design of such fair AI is that algorithms are trained on datasets that can themselves be tainted by the social biases of prior (human or AI) decision makers. In this paper, we investigate the robustness of existing (group) fairness criteria when an algorithm is trained on data that is biased due to errors by prior decision makers in identifying qualified individuals from a disadvantaged group. This can be viewed as labeling bias in the data. We first analytically show that some constraints, such as Demographic Parity, remain robust in the face of such statistical biases, while others, like Equalized Odds, are violated when the algorithm is trained on biased data. We also analyze the sensitivity of the firm's utility to these biases under each constraint. Finally, we provide numerical experiments on three real-world datasets (the FICO, Adult, and German credit score datasets) supporting our analytical findings.
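A minimal toy sketch of the intuition behind the abstract's contrast between the two criteria is given below; the population, bias rate, and decision rule are illustrative assumptions and not the paper's actual model. Demographic Parity is computed from decisions and group membership only, so labeling bias cannot affect whether it holds, whereas Equalized Odds conditions on the recorded labels, so the same decision rule can look fair under the true labels and unfair under the biased ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population with an advantaged group a and a disadvantaged group b.
# These distributions are illustrative assumptions, not the paper's model.
n = 100_000
group = rng.integers(0, 2, size=n)        # 0 = group a, 1 = group b
y_true = rng.binomial(1, 0.5, size=n)     # true qualification

# Labeling bias: a fraction of qualified individuals in group b were
# mistakenly recorded as unqualified by prior decision makers.
bias_rate = 0.3
flipped = (group == 1) & (y_true == 1) & (rng.random(n) < bias_rate)
y_biased = np.where(flipped, 0, y_true)

# A fixed decision rule (a noisy proxy of true qualification), applied
# identically to both groups, so it is fair w.r.t. the true labels.
d = np.where(rng.random(n) < 0.8, y_true, 1 - y_true)

def dp_gap(d, g):
    """Demographic Parity gap |P(d=1 | a) - P(d=1 | b)|; uses no labels."""
    return abs(d[g == 0].mean() - d[g == 1].mean())

def eo_gaps(d, y, g):
    """Equalized Odds gaps: TPR and FPR differences between groups."""
    rate = lambda yv, grp: d[(y == yv) & (g == grp)].mean()
    return abs(rate(1, 0) - rate(1, 1)), abs(rate(0, 0) - rate(0, 1))

# DP depends only on decisions and groups, so labeling bias cannot change it.
print("DP gap:", round(dp_gap(d, group), 3))

# EO conditions on the recorded labels: near zero under the true labels,
# but a visible FPR gap opens up once the biased labels are used.
print("EO gaps (true labels):  ", [round(x, 3) for x in eo_gaps(d, y_true, group)])
print("EO gaps (biased labels):", [round(x, 3) for x in eo_gaps(d, y_biased, group)])
```

In this sketch the flipped individuals are truly qualified, so the decision rule tends to accept them; once they are recorded as unqualified, the measured false-positive rate for the disadvantaged group rises and the Equalized Odds gap appears, while the Demographic Parity gap is unchanged by construction.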
