Poster

$\mathrm{SO}(2)$-Equivariant Reinforcement Learning

Dian Wang · Robin Walters · Robert Platt

Keywords: [ reinforcement learning ] [ equivariance ] [ robotic manipulation ]


Abstract: Equivariant neural networks enforce symmetry within the structure of their convolutional layers, resulting in a substantial improvement in sample efficiency when learning an equivariant or invariant function. Such models are applicable to robotic manipulation learning, which can often be formulated as a rotationally symmetric problem. This paper studies equivariant model architectures in the context of $Q$-learning and actor-critic reinforcement learning. We identify equivariant and invariant characteristics of the optimal $Q$-function and the optimal policy and propose equivariant DQN and SAC algorithms that leverage this structure. Our experiments demonstrate that these equivariant versions of DQN and SAC can be significantly more sample efficient than competing algorithms on an important class of robotic manipulation problems.
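The symmetry the abstract refers to can be stated concretely: for a planar manipulation task with top-down image observations and rotation-indexed discrete actions, the optimal $Q$-function is invariant under a joint rotation of state and action, $Q^*(g \cdot s, g \cdot a) = Q^*(s, a)$. Below is a minimal sketch, not the authors' code, of how one might numerically check this property for a candidate $Q$-network under a 90° rotation (the cyclic group $C_4$ as a discretization of $\mathrm{SO}(2)$). The class name `QNetwork`, the channel counts, and the assumption that actions are indexed by rotation angle are all illustrative.

```python
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Plain CNN Q-network over top-down image observations (illustrative only)."""

    def __init__(self, in_channels: int = 2, num_rotated_actions: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_rotated_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns Q(s, a) for each discrete (rotation-indexed) action.
        return self.head(self.conv(obs))  # shape: (batch, num_rotated_actions)


def c4_invariance_gap(q_net: nn.Module, obs: torch.Tensor) -> torch.Tensor:
    """Measure deviation from Q(g*s, g*a) = Q(s, a) for g = 90 degree rotation.

    Rotating the image (g*s) and cyclically shifting the action index (g*a)
    should leave the Q-values unchanged for an equivariant model; an
    unconstrained CNN will generally violate this.
    """
    q = q_net(obs)                                  # Q(s, .)
    obs_rot = torch.rot90(obs, k=1, dims=(-2, -1))  # g * s
    q_rot = q_net(obs_rot)                          # Q(g*s, .)
    # Entry a of Q(g*s, .) corresponds to action g*a, so shift it back
    # before comparing against Q(s, .).
    q_rot_aligned = torch.roll(q_rot, shifts=-1, dims=-1)
    return (q - q_rot_aligned).abs().max()


obs = torch.randn(8, 2, 32, 32)                 # batch of top-down observations
print(c4_invariance_gap(QNetwork(), obs))       # near zero only for equivariant nets
```

An equivariant DQN in the spirit of the paper builds this constraint into the network (for example with steerable convolutions), so the gap above is zero by construction rather than something the agent must learn from data.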
