

In-Person Poster Presentation / Poster Accept

Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling

Huayu Chen · Cheng Lu · Chengyang Ying · Hang Su · Jun Zhu

MH1-2-3-4 #121

Keywords: [ behavior modeling ] [ offline reinforcement learning ] [ generative models ] [ diffusion models ] [ reinforcement learning ]


Abstract:

In offline reinforcement learning, weighted regression is a common method to ensure the learned policy stays close to the behavior policy and to prevent selecting out-of-sample actions. In this work, we show that due to the limited distributional expressivity of policy models, previous methods might still select unseen actions during training, which deviates from their initial motivation. To address this problem, we adopt a generative approach by decoupling the learned policy into two parts: an expressive generative behavior model and an action evaluation model. The key insight is that such decoupling avoids learning an explicitly parameterized policy model with a closed-form expression. Directly learning the behavior policy allows us to leverage existing advances in generative modeling, such as diffusion-based methods, to model diverse behaviors. As for action evaluation, we combine our method with an in-sample planning technique to further avoid selecting out-of-sample actions and increase computational efficiency. Experimental results on D4RL datasets show that our proposed method achieves competitive or superior performance compared with state-of-the-art offline RL methods, especially in complex tasks such as AntMaze. We also empirically demonstrate that our method can successfully learn from a heterogeneous dataset containing multiple distinctive but similarly successful strategies, whereas previous unimodal policies fail.
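To make the decoupled design described above more concrete, the following is a minimal sketch, not the authors' implementation, of how such a policy could act at deployment time: an expressive generative behavior model (e.g., a diffusion model) proposes candidate actions for the current state, and a separately learned action-evaluation model re-weights and resamples among them, so the executed action always lies within the behavior model's support. All names here (BehaviorModel, QNetwork, select_action, num_candidates, temperature) are hypothetical placeholders.

```python
# Sketch only: decoupled policy = generative behavior model + action evaluator.
import numpy as np


class BehaviorModel:
    """Stand-in for an expressive generative model of the behavior policy,
    e.g. a diffusion model trained on the dataset's (state, action) pairs."""

    def sample(self, state: np.ndarray, num_candidates: int) -> np.ndarray:
        # In practice: run the reverse diffusion process conditioned on state.
        # Here: placeholder Gaussian samples with a fixed action dimension (3).
        return np.random.randn(num_candidates, 3)


class QNetwork:
    """Stand-in for a learned action-evaluation model Q(s, a)."""

    def __call__(self, state: np.ndarray, actions: np.ndarray) -> np.ndarray:
        # Placeholder scores; in practice, a critic trained on the offline data.
        return -np.linalg.norm(actions, axis=-1)


def select_action(state, behavior_model, q_network,
                  num_candidates: int = 32, temperature: float = 1.0):
    """Sample candidates from the behavior model and resample them in
    proportion to their evaluated value, keeping the chosen action in-sample."""
    candidates = behavior_model.sample(state, num_candidates)   # in-support proposals
    scores = q_network(state, candidates)                        # evaluate each candidate
    weights = np.exp((scores - scores.max()) / temperature)      # softmax re-weighting
    weights /= weights.sum()
    idx = np.random.choice(num_candidates, p=weights)            # value-weighted resampling
    return candidates[idx]


if __name__ == "__main__":
    state = np.zeros(4)
    action = select_action(state, BehaviorModel(), QNetwork())
    print("selected action:", action)
```

Because every executed action is drawn from the behavior model's own samples, this kind of selection avoids querying the evaluator on out-of-sample actions, which is the property the abstract emphasizes.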
