Poster
Generalized Principal-Agent Problem with a Learning Agent
Tao Lin · Yiling Chen
Hall 3 + Hall 2B #413
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
Abstract:
Generalized principal-agent problems, including Stackelberg games, contract design, and Bayesian persuasion, are a class of economic problems where an agent best responds to a principal's committed strategy. We study repeated generalized principal-agent problems under the assumption that the principal does not have commitment power and the agent uses algorithms to learn to respond to the principal. We reduce this problem to a one-shot generalized principal-agent problem where the agent approximately best responds. Using this reduction, we show that: (1) if the agent uses contextual no-regret learning algorithms with regret Reg(T), then the principal can guarantee utility at least U∗ − Θ(√(Reg(T)/T)), where U∗ is the principal's optimal utility in the classic model with a best-responding agent. (2) If the agent uses contextual no-swap-regret learning algorithms with swap-regret SReg(T), then the principal cannot obtain utility more than U∗ + O(SReg(T)/T). But (3) if the agent uses mean-based learning algorithms (which can be no-regret but not no-swap-regret), then the principal can sometimes do significantly better than U∗. These results not only refine previous results in Stackelberg games and contract design, but also lead to new results for Bayesian persuasion with a learning agent and for all generalized principal-agent problems where the agent does not have private information.
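The guarantees above rest on the fact that a no-regret learner's average play approximately best responds: the per-round gap to the best fixed action is Reg(T)/T, which vanishes as T grows. The following is a minimal, hypothetical sketch (not from the paper) using Hedge (multiplicative weights), a classic no-regret and mean-based learner, for an agent with two actions facing a fixed committed strategy of the principal; the toy payoffs 0.3 and 0.7 are invented for illustration.

```python
import math

# Hypothetical toy game: the principal commits to one fixed strategy, so the
# agent's two actions yield constant per-round utilities 0.3 and 0.7.
payoffs = [0.3, 0.7]
T = 5000
eta = math.sqrt(math.log(2) / T)   # standard Hedge step size for 2 actions
weights = [1.0, 1.0]
total = 0.0                        # agent's cumulative expected utility

for t in range(T):
    Z = sum(weights)
    probs = [w / Z for w in weights]
    # expected utility collected this round under the mixed play
    total += sum(p * u for p, u in zip(probs, payoffs))
    # full-information multiplicative-weights update
    weights = [w * math.exp(eta * u) for w, u in zip(weights, payoffs)]

best = max(payoffs) * T            # utility of the best fixed action
avg_regret = (best - total) / T    # Reg(T)/T: shrinks toward 0, so the
                                   # agent approximately best responds
print(avg_regret)
```

With T = 5000 the average regret is on the order of 0.01, matching the O(√(log K / T)) rate; the principal's loss relative to U∗ in result (1) is driven by exactly this vanishing quantity.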