Since its inception as a field of study roughly one decade ago, research in algorithmic fairness has exploded. Much of this work focuses on so-called "group fairness" notions, which address the relative treatment of different demographic groups. More theoretical work has advocated for "individual fairness," which, speaking intuitively, requires that people who are similar with respect to a given classification task be treated similarly by classifiers for that task. Both approaches face significant challenges: for example, the provable incompatibility of natural fairness desiderata (in the group case), and the absence of similarity information (in the individual case).
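The individual-fairness intuition above is commonly formalized as a Lipschitz condition: a (randomized) classifier M is individually fair if D(M(x), M(y)) ≤ d(x, y) for all individuals x and y, where d is a task-specific similarity metric and D is a distance on outcome distributions. The sketch below, with entirely hypothetical names and data, checks this condition using total variation distance for D:

```python
def total_variation(p, q):
    """Total variation distance between two outcome distributions."""
    return 0.5 * sum(abs(p[o] - q[o]) for o in p)

def is_individually_fair(outcome_dists, similarity, pairs):
    """Check the Lipschitz condition D(M(x), M(y)) <= d(x, y) on each pair.

    outcome_dists maps each individual to a distribution over outcomes;
    similarity maps a pair of individuals to their metric distance d(x, y).
    All inputs here are illustrative, not from any real system.
    """
    return all(
        total_variation(outcome_dists[x], outcome_dists[y]) <= similarity[(x, y)]
        for x, y in pairs
    )

# Hypothetical example: alice and bob are similar (distance 0.1),
# alice and carol are dissimilar (distance 0.8).
dists = {
    "alice": {"accept": 0.70, "reject": 0.30},
    "bob":   {"accept": 0.65, "reject": 0.35},
    "carol": {"accept": 0.10, "reject": 0.90},
}
sim = {("alice", "bob"): 0.10, ("alice", "carol"): 0.80}

print(is_individually_fair(dists, sim, sim.keys()))  # True: both pairs satisfy the bound
```

Note that the entire difficulty flagged above lies in obtaining the metric d: the check itself is trivial once the similarity information exists.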
The past two years have seen exciting developments on several fronts in theoretical computer science, including the investigation of scoring, classifying, ranking, and auditing fairness under definitions aiming to bridge the group and individual notions, and the construction of similarity metrics from (relatively few) queries to a human expert. A parallel vein of research in the ML community explores fairness via representations. This talk will motivate, highlight, and weave together threads from these recent contributions.