
In-Person Poster presentation / poster accept

Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased

Chao Yu · Jiaxuan Gao · Weilin Liu · Botian Xu · Hao Tang · Jiaqi Yang · Yu Wang · Yi Wu

MH1-2-3-4 #125

Keywords: [ Reinforcement Learning ] [ multi-agent reinforcement learning ] [ human-AI collaboration ]


There is a recent trend of applying multi-agent reinforcement learning (MARL) to train an agent that can cooperate with humans in a zero-shot fashion without using any human data. The typical workflow is to first repeatedly run self-play (SP) to build a policy pool and then train the final adaptive policy against this pool. A crucial limitation of this framework is that every policy in the pool is optimized with respect to the environment reward function, which implicitly assumes that the testing partners of the adaptive policy will be precisely optimizing the same reward function as well. However, human objectives are often substantially biased by their own preferences, which can differ greatly from the environment reward. We propose a more general framework, Hidden-Utility Self-Play (HSP), which explicitly models human biases as hidden reward functions in the self-play objective. By approximating the reward space with linear functions, HSP adopts an effective technique to generate an augmented policy pool containing biased policies. We evaluate HSP on the Overcooked benchmark. Empirical results show that HSP produces higher rewards than baselines when cooperating with learned human models, manually scripted policies, and real humans. The HSP policy is also rated as the most assistive policy based on human feedback.
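The core idea of modeling a biased partner as a hidden linear reward can be sketched as follows. This is a minimal illustration, not the paper's implementation: the event-feature names and the uniform sampling range are assumptions made for the example, and the actual HSP pipeline would then train one self-play partner per sampled utility.

```python
import random

# Illustrative Overcooked-style event features; these names are
# hypothetical placeholders, not taken from the paper's code.
EVENT_FEATURES = ["onion_pickup", "dish_pickup", "soup_delivery", "movement"]

def sample_hidden_utility(rng, scale=1.0):
    """Sample a random weight vector w over event features.

    HSP approximates the space of biased human objectives with linear
    reward functions; each sampled w defines one biased objective used
    to train a self-play partner for the augmented policy pool.
    """
    return {f: rng.uniform(-scale, scale) for f in EVENT_FEATURES}

def hidden_reward(weights, event_counts):
    """Biased reward = w . phi(trajectory), a linear function of event counts."""
    return sum(weights[f] * event_counts.get(f, 0.0) for f in EVENT_FEATURES)

# Seed an augmented pool of biased objectives (8 is an arbitrary choice here).
rng = random.Random(0)
utility_pool = [sample_hidden_utility(rng) for _ in range(8)]
```

A partner that, say, weights `soup_delivery` negatively would avoid delivering soups even though the environment rewards it, producing exactly the kind of biased behavior the adaptive policy must learn to accommodate.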
