

Poster

ContraDiff: Planning Towards High Return States via Contrastive Learning

Yixiang Shan · Zhengbang Zhu · Ting Long · Qifan Liang · Yi Chang · Weinan Zhang · Liang Yin

Hall 3 + Hall 2B #369
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

The performance of offline reinforcement learning (RL) is sensitive to the proportion of high-return trajectories in the offline dataset. However, in many simulation environments and real-world scenarios, low-return trajectories far outnumber high-return ones, which makes learning an efficient policy challenging. In this paper, we propose a method called Contrastive Diffuser (ContraDiff) to make full use of low-return trajectories and improve the performance of offline RL algorithms. Specifically, ContraDiff groups the states of trajectories in the offline dataset into high-return states and low-return states and treats them as positive and negative samples, respectively. It then designs a contrastive mechanism that pulls the planned trajectory of an agent toward high-return states and pushes it away from low-return states. Through this contrastive mechanism, trajectories with low returns serve as negative examples that provide a "counteracting force", guiding the agent to avoid areas associated with low returns and achieve better performance. Experiments on 27 sub-optimal datasets demonstrate the effectiveness of our proposed method. Our code is publicly available at https://github.com/Looomo/contradiff.
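To make the pull/push idea concrete, here is a minimal, hypothetical sketch of a contrastive guidance objective in PyTorch. It is not the authors' implementation (see the linked repository for that); the InfoNCE-style form, the function name `contrastive_guidance`, and the temperature `tau` are all illustrative assumptions.

```python
# Illustrative sketch only -- NOT the ContraDiff implementation.
# Minimizing this InfoNCE-style loss pulls each planned state toward
# high-return (positive) states and pushes it away from low-return
# (negative) states, as described in the abstract.
import torch
import torch.nn.functional as F

def contrastive_guidance(planned, positives, negatives, tau=0.5):
    """planned:   (B, D) states on the planned trajectory
    positives: (P, D) states drawn from high-return trajectories
    negatives: (N, D) states drawn from low-return trajectories
    """
    # Cosine similarities, scaled by a temperature (hypothetical choice).
    planned = F.normalize(planned, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = planned @ positives.T / tau   # (B, P)
    neg_sim = planned @ negatives.T / tau   # (B, N)

    # -log( sum(exp(pos)) / sum(exp(pos) + exp(neg)) ):
    # smaller when planned states are near positives and far from negatives.
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    loss = torch.logsumexp(logits, dim=1) - torch.logsumexp(pos_sim, dim=1)
    return loss.mean()

# Example: one classifier-guidance-style step that steers 8 planned states
# (dim 4) toward 16 positive samples and away from 32 negative samples.
planned = torch.randn(8, 4, requires_grad=True)
loss = contrastive_guidance(planned, torch.randn(16, 4), torch.randn(32, 4))
loss.backward()
with torch.no_grad():
    planned -= 0.1 * planned.grad   # gradient of the loss steers the plan
```

In a diffusion planner, a gradient step of this kind could plausibly be applied at each denoising iteration, analogous to classifier guidance; the step size and where in the sampling loop it is applied are design choices not specified by the abstract.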
