Poster

Dropout Q-Functions for Doubly Efficient Reinforcement Learning

Takuya Hiraoka · Takahisa Imagawa · Taisei Hashimoto · Takashi Onishi · Yoshimasa Tsuruoka

Virtual

Keywords: [ reinforcement learning ]

Mon 25 Apr 2:30 a.m. PDT — 4:30 a.m. PDT

Abstract:

Randomized ensembled double Q-learning (REDQ) (Chen et al., 2021b) has recently achieved state-of-the-art sample efficiency on continuous-action reinforcement learning benchmarks. This superior sample efficiency is made possible by using a large Q-function ensemble. However, REDQ is much less computationally efficient than non-ensemble counterparts such as Soft Actor-Critic (SAC) (Haarnoja et al., 2018a). To make REDQ more computationally efficient, we propose DroQ, a variant of REDQ that uses a small ensemble of dropout Q-functions. Our dropout Q-functions are simple Q-functions equipped with dropout connections and layer normalization. Despite their simplicity of implementation, our experimental results indicate that DroQ is doubly (sample and computationally) efficient: it achieves sample efficiency comparable to that of REDQ, much better computational efficiency than REDQ, and computational efficiency comparable to that of SAC.
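As a rough illustration of the dropout Q-function described in the abstract, the sketch below shows an MLP Q-function with a dropout connection and layer normalization after each hidden linear layer, used in a small ensemble. This is a minimal sketch assuming a PyTorch implementation; the hidden width, dropout rate, and ensemble size are illustrative assumptions, not the paper's reported hyperparameters.

```python
# Minimal sketch of a dropout Q-function (PyTorch assumed).
# Hidden size, dropout rate, and ensemble size are illustrative assumptions.
import torch
import torch.nn as nn


class DropoutQFunction(nn.Module):
    """Q(s, a) approximator: an MLP with dropout and layer normalization."""

    def __init__(self, obs_dim, act_dim, hidden_dim=256, dropout_rate=0.01):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden_dim),
            nn.Dropout(p=dropout_rate),   # dropout connection
            nn.LayerNorm(hidden_dim),     # layer normalization
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.Dropout(p=dropout_rate),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),     # scalar Q-value
        )

    def forward(self, obs, act):
        # Concatenate state and action, return the estimated Q-value.
        return self.net(torch.cat([obs, act], dim=-1))


# A small ensemble of dropout Q-functions (two here, as in standard SAC)
# stands in for REDQ's large Q-function ensemble.
q_ensemble = [DropoutQFunction(obs_dim=17, act_dim=6) for _ in range(2)]
```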
