

In-Person Poster presentation / top 25% paper

Hyperbolic Deep Reinforcement Learning

Edoardo Cetin · Benjamin Chamberlain · Michael Bronstein · Jonathan J Hunt

MH1-2-3-4 #45

Keywords: [ representation learning ] [ machine learning ] [ hyperbolic space ] [ reinforcement learning ]


Abstract:

In deep reinforcement learning (RL), useful information about the state is inherently tied to its possible future successors. Consequently, encoding features that capture the hierarchical relationships between states into the model's latent representations is often conducive to recovering effective policies. In this work, we study a new class of deep RL algorithms that promote encoding such relationships by using hyperbolic space to model latent representations. However, we find that a naive application of existing methodology from the hyperbolic deep learning literature leads to fatal instabilities due to the non-stationarity and variance characterizing common gradient estimators in RL. Hence, we design a new general method that directly addresses such optimization challenges and enables stable end-to-end learning with deep hyperbolic representations. We empirically validate our framework by applying it to popular on-policy and off-policy RL algorithms on the Procgen and Atari 100K benchmarks, attaining near universal performance and generalization benefits. Given its natural fit, we hope this work will inspire future RL research to consider hyperbolic representations as a standard tool.
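To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how an encoder's Euclidean output can be projected onto the Poincaré ball via the exponential map at the origin, the standard construction in hyperbolic deep learning; the curvature, layer sizes, and clipping constants are illustrative assumptions.

```python
# Minimal sketch, not the paper's code: a Euclidean torso whose latent is mapped onto
# the Poincare ball, so distances between latents grow exponentially toward the boundary
# and can naturally encode hierarchical (tree-like) relationships between states.
import torch
import torch.nn as nn


def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-5) -> torch.Tensor:
    """Exponential map at the origin of the Poincare ball with curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    # tanh keeps the image strictly inside the ball of radius 1/sqrt(c).
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)


def poincare_dist(x: torch.Tensor, y: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Geodesic distance on the Poincare ball between points x and y."""
    sqrt_c = c ** 0.5
    diff2 = (x - y).pow(2).sum(-1)
    denom = (1 - c * x.pow(2).sum(-1)) * (1 - c * y.pow(2).sum(-1))
    return (1.0 / sqrt_c) * torch.acosh(1 + 2 * c * diff2 / denom.clamp_min(1e-5))


class HyperbolicEncoder(nn.Module):
    """Euclidean feature extractor followed by a projection onto the Poincare ball."""

    def __init__(self, obs_dim: int, latent_dim: int = 32, c: float = 1.0):
        super().__init__()
        self.c = c
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return expmap0(self.torso(obs), c=self.c)


if __name__ == "__main__":
    enc = HyperbolicEncoder(obs_dim=8)
    z = enc(torch.randn(4, 8))
    print(z.norm(dim=-1))             # all norms < 1: latents lie inside the unit ball
    print(poincare_dist(z[0], z[1]))  # hyperbolic distance between two latents
```

In practice, downstream policy or value heads would consume these hyperbolic latents (e.g., via hyperbolic distances or gyroplane layers), and, as the abstract notes, additional stabilization of the gradients is needed to train such representations reliably end to end in RL.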
