

The Human-AI Substitution game: active learning from a strategic labeler

Tom Yan · Chicheng Zhang

Halle B #174
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT


The standard active learning setting assumes a willing labeler, who provides labels on informative examples to speed up learning. However, if the labeler wishes to be compensated for as many labels as possible before learning finishes, the labeler may benefit from actually slowing down learning. This incentive arises, for instance, if the labeler is to be replaced by the ML model once it is trained. In this paper, we initiate the study of learning from a strategic labeler, who may abstain from labeling to slow down learning. We first prove that strategic abstention can prolong learning, and we propose a novel complexity measure and representation to analyze the query complexity of the learning game. Next, we develop a near-optimal deterministic algorithm, prove its robustness to strategic labeling, and contrast it with other active learning algorithms. We also analyze extensions that encompass more general learning goals and labeler assumptions. Finally, we characterize the query cost of multi-task active learning, with and without abstention. Our first exploration of strategic labeling aims to consolidate our theoretical understanding of the "imitative" nature of ML in human-AI interaction.
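To make the core phenomenon concrete, here is a minimal toy simulation (not the paper's construction) of active learning of a 1-D threshold. The learner prefers midpoint queries, which halve the version space. A willing labeler answers every query, so learning finishes in about log2(n) labels; a hypothetical strategic labeler abstains on informative midpoint queries and answers only queries adjacent to the version-space boundary, forcing a linear scan and many more paid labels. All names and the abstention policy are illustrative assumptions.

```python
def binary_search_queries(n, t):
    """Willing labeler: answers every query (label(x) = 1 iff x >= t).
    The learner halves the version space [lo, hi] each round."""
    lo, hi, q = 0, n, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        q += 1
        if mid >= t:
            hi = mid
        else:
            lo = mid
    return q


def strategic_queries(n, t):
    """Toy strategic labeler: abstains on any query that would halve the
    version space, answering only boundary-adjacent (minimally informative)
    queries. The learner then pays one abstained query plus one answered
    query to shrink the version space by a single point."""
    lo, hi, q = 0, n, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid not in (lo + 1, hi - 1):
            q += 1          # informative midpoint query: labeler abstains
        x = lo + 1          # learner retreats to a boundary-adjacent point
        q += 1              # labeler answers this query
        if x >= t:
            hi = x
        else:
            lo = x
    return q


willing = binary_search_queries(1024, 700)    # ~log2(1024) = 10 labels
strategic = strategic_queries(1024, 700)      # ~2t queries before convergence
```

In this toy model the strategic labeler turns a logarithmic query cost into a linear one, which is the kind of slowdown the paper's complexity measure is designed to quantify.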
