In-Person Poster presentation / top 25% paper

The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry

Dian Wang · Jung Yeon Park · Neel Sortur · Lawson Wong · Robin Walters · Robert Platt

MH1-2-3-4 #102

Keywords: [ robotics ] [ equivariant learning ] [ reinforcement learning ]


Abstract:

Extensive work has demonstrated that equivariant neural networks can significantly improve sample efficiency and generalization by enforcing an inductive bias in the network architecture. These applications typically assume that the domain symmetry is fully described by explicit transformations of the model inputs and outputs. However, many real-life applications contain only latent or partial symmetries which cannot be easily described by simple transformations of the input. In these cases, it is necessary to learn symmetry in the environment instead of imposing it mathematically on the network architecture. We discover, surprisingly, that imposing equivariance constraints that do not exactly match the domain symmetry is very helpful in learning the true symmetry in the environment. We differentiate between extrinsic and incorrect symmetry constraints and show that while imposing incorrect symmetry can impede the model's performance, imposing extrinsic symmetry can actually improve performance. We demonstrate that an equivariant model can significantly outperform non-equivariant methods on domains with latent symmetries both in supervised learning and in reinforcement learning for robotic manipulation and control problems.
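To make the abstract's central idea concrete, here is a minimal sketch (not the authors' code, and independent of their architecture) of the kind of equivariance constraint described above: an unconstrained linear layer is projected onto the space of C4-equivariant maps by averaging over the four 90-degree rotations, so that rotating the input provably rotates the output. All names (`symmetrize`, `rot90_flat`, the grid size `n`) are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # spatial size of the input grid
W = rng.standard_normal((n * n, n * n))  # unconstrained linear layer


def rot90_flat(x_flat):
    """Rotate a flattened n x n image by 90 degrees."""
    return np.rot90(x_flat.reshape(n, n)).reshape(-1)


def symmetrize(W):
    """Project W onto the space of C4-equivariant linear maps by
    group averaging: W_eq = (1/4) * sum_k R^{-k} W R^k."""
    # Build the permutation matrix R of a single 90-degree rotation.
    R = np.zeros((n * n, n * n))
    for i in range(n * n):
        e = np.zeros(n * n)
        e[i] = 1.0
        R[:, i] = rot90_flat(e)
    W_eq = np.zeros_like(W)
    Rk = np.eye(n * n)
    for _ in range(4):                   # sum over the four rotations
        W_eq += Rk.T @ W @ Rk            # R is a permutation, so R^{-1} = R^T
        Rk = R @ Rk
    return W_eq / 4.0


W_eq = symmetrize(W)

# Equivariance check: f(rotate(x)) == rotate(f(x)) up to float error.
x = rng.standard_normal(n * n)
print(np.allclose(W_eq @ rot90_flat(x), rot90_flat(W_eq @ x)))  # True
```

In the latent-symmetry setting the paper studies, the input transformation implemented here by `rot90_flat` would not exactly match the true symmetry of the domain; the paper's finding is that imposing such an "extrinsic" constraint anyway can still help the model learn the true symmetry.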
