Poster
in
Workshop: Socially Responsible Machine Learning

Incentive Mechanisms in Strategic Learning

Kun Jin · Xueru Zhang · Mohammad Mahdi Khalili · Parinaz Naghizadeh · Mingyan Liu


Abstract:

We study the design of a class of incentive mechanisms that can effectively improve algorithm robustness in strategic learning. A conventional strategic learning problem is modeled as a Stackelberg game between an algorithm designer (a principal, or decision maker) and individual agents subject to the algorithm's decisions, potentially from different demographic groups. While the former benefits from decision accuracy, the latter may have an incentive to game the algorithm into making favorable but erroneous decisions by merely changing their observable features without affecting their true labels. Whereas prior works tend to focus on designing decision rules robust to such strategic maneuvering, this study focuses on an alternative: designing incentive mechanisms that shape the agents' utilities and induce improvement actions, which genuinely improve their skills and true labels and thus, in turn, benefit both parties in the Stackelberg game. Specifically, the principal and the mechanism provider (which could be the principal itself) move together in the first stage, publishing and committing to a classifier and an incentive mechanism. The agents are second movers and best respond to the published classifier and incentive mechanism. We study how the mechanism can induce improvement actions and positively impact a number of social well-being metrics, such as the overall skill level of the agents (efficiency) and the positive or true positive rate differences between demographic groups (fairness).
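The agents' second-stage best response described above can be illustrated with a minimal sketch. All names, costs, and the threshold-classifier form below are illustrative assumptions for exposition, not the paper's actual model: an agent below the decision threshold compares doing nothing, gaming (shifting observable features at some cost), and improving (raising the true skill at a higher cost, partially offset by a subsidy from the incentive mechanism), and picks the action with the highest utility.

```python
from dataclasses import dataclass


@dataclass
class Agent:
    x: float  # observable feature (proxy for skill)


def best_response(agent, theta, benefit=1.0, c_game=0.5,
                  c_improve=1.0, subsidy=0.0):
    """Agent's best response to a threshold classifier f(x) = 1[x >= theta]
    and an incentive mechanism paying `subsidy` per unit of improvement.
    Parameter names and values are hypothetical, chosen only to illustrate
    how a subsidy can flip the best response from gaming to improving."""
    gap = max(0.0, theta - agent.x)          # distance to acceptance
    u_none = benefit if agent.x >= theta else 0.0
    u_game = benefit - c_game * gap          # moves features, not true label
    u_improve = benefit - (c_improve - subsidy) * gap  # raises true skill
    # Ties resolve in favor of inaction, then gaming (list order below).
    actions = [("none", u_none), ("game", u_game), ("improve", u_improve)]
    return max(actions, key=lambda t: t[1])[0]
```

For an agent below the threshold, gaming is cheaper without a mechanism, but once the subsidy lowers the effective improvement cost below the gaming cost, the best response switches to genuine improvement, which is the effect the mechanism design targets.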
