

In-Person Poster presentation / poster accept

Equivariant Descriptor Fields: SE(3)-Equivariant Energy-Based Models for End-to-End Visual Robotic Manipulation Learning

Hyunwoo Ryu · Hong-in Lee · Jeong-Hoon Lee · Jongeun Choi

MH1-2-3-4 #46

Keywords: [ learning from demonstration ] [ unseen object ] [ sample efficient ] [ equivariant robotics ] [ SO(3) ] [ SE(3) ] [ roto-translation equivariance ] [ category-level manipulation ] [ applications ] [ graph neural networks ] [ equivariance ] [ robotics ] [ manipulation ] [ imitation learning ] [ MCMC ] [ end-to-end ] [ point clouds ] [ Lie group ] [ few shot ] [ energy-based model ] [ Langevin dynamics ] [ robotic manipulation ] [ representation theory ]


Abstract:

End-to-end learning for visual robotic manipulation is known to suffer from sample inefficiency, requiring large numbers of demonstrations. Spatial roto-translation equivariance, i.e., SE(3)-equivariance, can be exploited to improve the sample efficiency of learning robotic manipulation. In this paper, we present SE(3)-equivariant models for visual robotic manipulation from point clouds that can be trained fully end-to-end. By utilizing the representation theory of Lie groups, we construct novel SE(3)-equivariant energy-based models that allow highly sample-efficient end-to-end learning. We show that our models can learn from scratch without prior knowledge, yet are highly sample efficient (5–10 demonstrations are enough). Furthermore, we show that our models can generalize to tasks with (i) previously unseen target object poses, (ii) previously unseen target object instances of the category, and (iii) previously unseen visual distractors. We experiment with 6-DoF robotic manipulation tasks to validate our models' sample efficiency and generalizability. Code is available at: https://github.com/tomato1mule/edf
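For readers unfamiliar with the central property, a map f on point clouds is SE(3)-equivariant if f(Rx + t) = R f(x) + t for every rotation R and translation t. The sketch below verifies this numerically for a trivially equivariant map (the centroid); it illustrates the definition only and is not the paper's model.

```python
import numpy as np

def centroid(points):
    """A toy SE(3)-equivariant map on point clouds: the mean point."""
    return points.mean(axis=0)

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))  # a random 3D point cloud

# Sample a random rotation via QR decomposition, fixing det(R) = +1.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
t = rng.standard_normal(3)  # a random translation

lhs = centroid(pts @ Q.T + t)   # f(R x + t): transform first, then map
rhs = Q @ centroid(pts) + t     # R f(x) + t: map first, then transform
print(np.allclose(lhs, rhs))    # True: the centroid is SE(3)-equivariant
```

The same identity, with R and t acting on end-effector poses rather than points, is what lets an equivariant policy trained on a few demonstrations transfer to unseen object poses without data augmentation.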
