Poster

Near-Optimal Representation Learning for Hierarchical Reinforcement Learning

Ofir Nachum · Shixiang Gu · Honglak Lee · Sergey Levine

Great Hall BC #2

Keywords: representation, hierarchy, reinforcement learning


Abstract:

We study the problem of representation learning in goal-conditioned hierarchical reinforcement learning. In such hierarchical structures, a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach. Accordingly, the choice of representation -- the mapping of observation space to goal space -- is crucial. To study this problem, we develop a notion of sub-optimality of a representation, defined in terms of expected reward of the optimal hierarchical policy using this representation. We derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice. Results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies, compared to existing methods.
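To make the setup described above concrete, the following is a minimal sketch (not the authors' released code) of a goal-conditioned hierarchical loop: a representation f maps observations to a goal space, the higher level emits a goal in that space every c steps, and the lower level is rewarded by the negative distance between the representation of the reached state and the commanded goal. All names (f, high_level_policy, low_level_policy, the toy dynamics, and the dimensions) are illustrative assumptions; in the paper the mapping f is learned by optimizing the sub-optimality bound rather than fixed.

```python
# Hedged sketch of goal-conditioned hierarchical RL with a representation f
# mapping observation space to goal space. All components here are toy stand-ins.
import numpy as np

obs_dim, goal_dim, act_dim = 8, 2, 2
rng = np.random.default_rng(0)

# Representation f: observation space -> goal space.
# Here a fixed random linear map; the paper learns this mapping by optimizing
# a bound on the sub-optimality of the resulting hierarchical policy.
W = rng.normal(scale=0.1, size=(goal_dim, obs_dim))

def f(obs):
    return W @ obs

def high_level_policy(obs):
    # Propose a goal in representation space (random here, for illustration only).
    return f(obs) + rng.normal(scale=0.5, size=goal_dim)

def low_level_policy(obs, goal):
    # A trained low-level policy would act to move f(obs) toward the goal;
    # here it is a random placeholder.
    return rng.normal(scale=0.1, size=act_dim)

def env_step(obs, action):
    # Toy dynamics: the observation drifts with the action (padded to obs_dim).
    delta = np.zeros(obs_dim)
    delta[:act_dim] = action
    return obs + delta

obs = rng.normal(size=obs_dim)
c = 5  # goal horizon: the high level emits a new goal every c steps
for t in range(20):
    if t % c == 0:
        goal = high_level_policy(obs)
    action = low_level_policy(obs, goal)
    obs = env_step(obs, action)
    # Intrinsic low-level reward: negative distance in representation space.
    intrinsic_reward = -np.linalg.norm(f(obs) - goal)
    print(f"t={t:2d} intrinsic reward={intrinsic_reward:.3f}")
```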
