

Score Regularized Policy Optimization through Diffusion Behavior

Huayu Chen · Cheng Lu · Zhengyi Wang · Hang Su · Jun Zhu

Halle B #296
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT


Recent developments in offline reinforcement learning have uncovered the immense potential of diffusion modeling, which excels at representing heterogeneous behavior policies. However, sampling from diffusion policies is considerably slow because it necessitates tens to hundreds of iterative inference steps for a single action. To address this issue, we propose to extract an efficient deterministic inference policy from critic models and pretrained diffusion behavior models, leveraging the latter to directly regularize the policy gradient with the behavior distribution's score function during optimization. Our method enjoys the powerful generative capabilities of diffusion modeling while completely circumventing the computationally intensive and time-consuming diffusion sampling scheme, both during training and evaluation. Extensive results on D4RL tasks show that our method boosts action sampling speed by more than 25 times compared with various leading diffusion-based methods in locomotion tasks, while still maintaining state-of-the-art performance.
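For intuition, below is a minimal PyTorch sketch of the score-regularized policy update described in the abstract. All names here (MLP, NoisePredictor, score_regularized_policy_loss), the noise schedule, and the temperature beta are illustrative assumptions rather than the authors' released implementation; the regularizer follows a score-distillation-style surrogate in which a frozen diffusion behavior model's noise prediction steers the deterministic policy toward the behavior distribution, while the critic term pushes it toward high values.

    # Minimal sketch of a score-regularized policy update (assumed design,
    # not the authors' exact code). A deterministic policy is trained to
    # maximize Q while a frozen diffusion behavior model regularizes the
    # policy gradient with its score estimate; no diffusion sampling occurs.
    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        """Small fully connected network used for every component below."""
        def __init__(self, in_dim, out_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, x):
            return self.net(x)

    class NoisePredictor(nn.Module):
        """Stand-in for a pretrained diffusion behavior model eps(a_t, s, t)."""
        def __init__(self, state_dim, action_dim):
            super().__init__()
            self.net = MLP(state_dim + action_dim + 1, action_dim)

        def forward(self, noised_action, state, t):
            return self.net(torch.cat([noised_action, state, t], dim=-1))

    def score_regularized_policy_loss(policy, critic, behavior_eps, states,
                                      beta=0.05):
        """One policy update: maximize Q while regularizing the gradient with
        the behavior distribution's score, estimated by the frozen diffusion
        model. beta (assumed value) trades regularization against the critic
        term; in practice only the policy's parameters are stepped."""
        actions = policy(states)  # deterministic a = pi(s)

        # (1) Critic term: push actions toward high Q-values.
        q_loss = -critic(torch.cat([states, actions], dim=-1)).mean()

        # (2) Score regularization: noise the action as in the forward
        # diffusion process and query the frozen behavior model, whose
        # noise prediction approximates -sigma_t * grad log p_t(a_t | s).
        with torch.no_grad():
            t = torch.rand(states.shape[0], 1, device=states.device)
            alpha_t = torch.cos(0.5 * torch.pi * t)  # assumed VP-style schedule
            sigma_t = torch.sin(0.5 * torch.pi * t)
            noise = torch.randn_like(actions)
            noised = alpha_t * actions + sigma_t * noise
            eps_hat = behavior_eps(noised, states, t)

        # Minimizing <eps_hat - eps, a> moves a up the behavior log-density
        # in expectation; the gradient flows only through the clean action.
        reg_loss = ((eps_hat - noise) * actions).sum(-1).mean()
        return q_loss + beta * reg_loss

    # Usage on random data (state/action sizes are arbitrary placeholders).
    state_dim, action_dim = 17, 6
    policy = MLP(state_dim, action_dim)
    critic = MLP(state_dim + action_dim, 1)
    behavior = NoisePredictor(state_dim, action_dim)  # pretend it is pretrained
    loss = score_regularized_policy_loss(policy, critic, behavior,
                                         torch.randn(32, state_dim))
    loss.backward()

Subtracting the sampled noise from the predicted noise leaves the expected gradient unchanged, since the noise is independent of the policy parameters, but reduces its variance; this is a standard trick in score-distillation-style objectives and is one plausible reading of how the score function regularizes the policy gradient here.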
