Poster
Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse
Seung Hyun Cheon · Anneke Wernerfelt · Sorelle Friedler · Berk Ustun
Hall 3 + Hall 2B #524
Machine learning models are often used to automate or support decisions in applications such as lending and hiring. In such settings, consumer protection rules mandate that we provide consumers who receive adverse decisions with a list of "principal reasons." In practice, lenders and employers identify principal reasons as the top-scoring features from a feature attribution method. In this work, we study how such practices align with one of the underlying goals of consumer protection -- recourse -- i.e., educating individuals on how to achieve a desired outcome. We show that standard attribution methods can highlight features that will not lead to recourse -- providing individuals with reasons without recourse. We propose to score features on the basis of responsiveness, i.e., the proportion of interventions that can lead to a desired outcome. We develop efficient methods to compute responsiveness scores for any model and any dataset under complex actionability constraints. We present an empirical study on the responsiveness of explanations in lending, and demonstrate how responsiveness scores can highlight features that support recourse and mitigate harm by flagging instances with fixed predictions.
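A minimal sketch of the core idea described above, under simplifying assumptions: we score a single feature by enumerating a set of feasible interventions on that feature and measuring the fraction that flips the model's prediction to the desired outcome. The function name `responsiveness_score`, the synthetic data, and the simple per-feature action sets are illustrative assumptions, not the authors' implementation; the paper's method handles richer actionability constraints and any model class.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def responsiveness_score(model, x, feature_idx, feasible_values, desired_class=1):
    """Hypothetical helper: fraction of feasible single-feature interventions
    on x[feature_idx] that lead the model to predict the desired class."""
    actions = [v for v in feasible_values if v != x[feature_idx]]
    if not actions:
        return 0.0  # immutable feature: no interventions are available
    hits = 0
    for v in actions:
        x_new = x.copy()
        x_new[feature_idx] = v
        if model.predict(x_new.reshape(1, -1))[0] == desired_class:
            hits += 1
    return hits / len(actions)

# Toy lending-style example (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))               # e.g., [income, n_accounts, age]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x_denied = X[clf.predict(X) == 0][0]        # an individual with an adverse decision

# Simple actionability constraints per feature (assumed for this sketch).
feasible = {
    0: np.linspace(-2, 2, 21),  # income: bounded continuous changes
    1: np.arange(-2, 3),        # n_accounts: integer-valued changes
    2: [],                      # age: immutable, so no feasible interventions
}
for j, name in enumerate(["income", "n_accounts", "age"]):
    print(name, responsiveness_score(clf, x_denied, j, feasible[j]))
```

A score near 1 indicates that most feasible actions on that feature achieve recourse, a score of 0 flags a reason without recourse (e.g., an immutable feature), and an all-zero profile across features would flag an instance with a fixed prediction.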