

Poster

Diverse Policies Recovering via Pointwise Mutual Information Weighted Imitation Learning

Hanlin Yang · Jian Yao · Weiming Liu · Qing Wang · Hanmin Qin · Kong hansheng · Kirk Tang · Jiechao Xiong · Chao Yu · Kai Li · Junliang Xing · Hongwu Chen · Juchao Zhuo · QIANG FU · Yang Wei · Haobo Fu

Hall 3 + Hall 2B #472
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Recovering a spectrum of diverse policies from a set of expert trajectories is an important research topic in imitation learning. After determining a latent style for a trajectory, previous diverse-policy recovery methods usually employ a vanilla behavioral cloning objective conditioned on the latent style, treating each state-action pair in the trajectory with equal importance. Based on the observation that in many scenarios behavioral styles are highly relevant to only a subset of state-action pairs, this paper presents a new principled method for recovering diverse policies. In particular, after inferring or assigning a latent style for a trajectory, we enhance vanilla behavioral cloning by incorporating a weighting mechanism based on pointwise mutual information. This additional weighting reflects the significance of each state-action pair's contribution to learning the style, allowing our method to focus on the state-action pairs most representative of that style. We provide theoretical justifications for our new objective, and extensive empirical evaluations confirm the effectiveness of our method in recovering diverse policies from expert data.
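The core idea in the abstract can be sketched as follows: the behavioral cloning loss is reweighted by the pointwise mutual information (PMI) between the latent style and each state-action pair, PMI(z; (s, a)) = log p((s, a) | z) − log p((s, a)). This is a minimal illustrative sketch, not the paper's implementation; the probability estimates and function names (`pmi_weights`, `weighted_bc_loss`) are assumptions for illustration, and the paper's exact estimator and clipping scheme may differ.

```python
import numpy as np

def pmi_weights(p_pair_given_style, p_pair):
    """Pointwise mutual information between a latent style z and each
    state-action pair: PMI = log p((s,a)|z) - log p((s,a)).
    Both inputs are arrays of (estimated) probabilities, one per pair."""
    return np.log(p_pair_given_style) - np.log(p_pair)

def weighted_bc_loss(log_policy_probs, weights, eps=1e-8):
    """Behavioral cloning as weighted negative log-likelihood.
    Pairs with positive PMI (more likely under the style than on average)
    dominate the loss; negative-PMI pairs are clipped to zero here,
    which is one plausible choice, not necessarily the paper's."""
    w = np.clip(weights, 0.0, None)
    return -np.sum(w * log_policy_probs) / max(np.sum(w), eps)
```

For example, a pair that is twice as likely under style z as on average receives weight log 2, while a style-irrelevant pair (equal probabilities) receives weight 0 and drops out of the loss, which is exactly the "focus on style-representative pairs" behavior the abstract describes.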
