

Workshop

Regret Minimization for Partially Observable Deep Reinforcement Learning

Peter Jin · Sergey Levine · Kurt Keutzer

East Meeting Level 8 + 15 #24

Thu 3 May, 11 a.m. PDT

Deep reinforcement learning algorithms that estimate state and state-action value functions have been shown to be effective in a variety of challenging domains, including learning control strategies from raw image pixels. However, algorithms that estimate state and state-action value functions typically assume a fully observed state and must compensate for partial or non-Markovian observations by using finite-length frame-history observations or recurrent networks. In this work, we propose a new deep reinforcement learning algorithm based on counterfactual regret minimization that iteratively updates an approximation to a cumulative clipped advantage function and is robust to partially observed state. We demonstrate that this new class of algorithms can substantially outperform strong baseline methods on several partially observed reinforcement learning tasks: Pong with single-frame observations, and the challenging Doom (ViZDoom) and Minecraft (Malmö) first-person navigation benchmarks.
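
The abstract mentions deriving a policy from a cumulative clipped advantage function in the spirit of counterfactual regret minimization. The sketch below illustrates the regret-matching step in a tabular form, assuming the policy is made proportional to the positive part of the cumulative clipped advantages; the function and variable names (`regret_matching_policy`, `cum_clipped_adv`) are hypothetical and not taken from the paper's implementation.

```python
import numpy as np

def regret_matching_policy(cum_clipped_adv):
    """Map a vector of cumulative clipped advantages for one observation to
    action probabilities via regret matching: each action's probability is
    proportional to the positive part of its cumulative clipped advantage,
    with a uniform fallback when no advantage is positive.

    Illustrative sketch only; this is an assumption about the update, not
    the authors' released code.
    """
    positive = np.maximum(cum_clipped_adv, 0.0)
    total = positive.sum()
    if total <= 0.0:
        # No positive cumulative advantage: fall back to a uniform policy.
        return np.full_like(positive, 1.0 / len(positive))
    return positive / total

# Example: cumulative clipped advantages [2.0, -1.0, 1.0] over three actions
# yield action probabilities [2/3, 0, 1/3].
print(regret_matching_policy(np.array([2.0, -1.0, 1.0])))
```

In the full deep RL setting described by the abstract, the tabular advantages above would instead be produced by a function approximator over observations, with the cumulative clipped advantage estimate refreshed on each iteration.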
