

Poster

Offline Reinforcement Learning with Value-based Episodic Memory

Xiaoteng Ma · Yiqin Yang · Hao Hu · Jun Yang · Chongjie Zhang · Qianchuan Zhao · Bin Liang · Qihan Liu

Keywords: [ reinforcement learning ]


Abstract:

Offline reinforcement learning (RL) shows promise for applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V-function instead of the Q-function to naturally keep the learning procedure within the support of an offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V-Learning (EVL), which smoothly interpolates between optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V-values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis of the convergence properties of our proposed VEM method, and empirical results on the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks.
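To illustrate the core idea of expectile-based value learning, the following is a minimal sketch of an asymmetric (expectile) regression loss on the one-step TD error for a V-network. All names here (VNet, expectile_v_loss, tau) and the hyperparameters are illustrative assumptions, not the authors' released implementation; the sketch only shows how an expectile weight lets the value target interpolate between evaluating the behavior policy (tau = 0.5) and the best dataset-supported returns (tau close to 1), without querying out-of-dataset actions.

```python
# Hedged sketch of an expectile V-learning update (not the paper's official code).
import torch
import torch.nn as nn


class VNet(nn.Module):
    """Simple feed-forward state-value network V(s)."""
    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s).squeeze(-1)


def expectile_v_loss(v, target_v, r, done, tau=0.9, gamma=0.99):
    """Expectile regression on the TD error delta = r + gamma * V(s') - V(s).

    tau = 0.5 reduces to symmetric TD evaluation of the behavior policy;
    tau -> 1 weights positive errors more, pushing V toward the best
    returns supported by the dataset (more optimistic, less conservative).
    """
    td_target = r + gamma * (1.0 - done) * target_v   # bootstrapped target
    diff = td_target - v                              # TD error
    weight = torch.abs(tau - (diff < 0).float())      # asymmetric expectile weight
    return (weight * diff.pow(2)).mean()


# Illustrative single update step on a random batch of transitions.
state_dim = 17
v_net, v_target = VNet(state_dim), VNet(state_dim)
v_target.load_state_dict(v_net.state_dict())
opt = torch.optim.Adam(v_net.parameters(), lr=3e-4)

s, s_next = torch.randn(256, state_dim), torch.randn(256, state_dim)
r, done = torch.randn(256), torch.zeros(256)

with torch.no_grad():
    target_v = v_target(s_next)
loss = expectile_v_loss(v_net(s), target_v, r, done, tau=0.9)
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch, tau is the single knob that trades off conservatism against generalization: because the target is built only from states and rewards observed in the dataset, raising tau improves the value estimates without introducing the extrapolation error that Q-learning incurs on out-of-dataset actions.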
