

In-Person Poster Presentation / Poster Accept

Equivariant Descriptor Fields: SE(3)-Equivariant Energy-Based Models for End-to-End Visual Robotic Manipulation Learning

Hyunwoo Ryu · Hong-in Lee · Jeong-Hoon Lee · Jongeun Choi

MH1-2-3-4 #46

Keywords: [ Applications ] [ few-shot ] [ robotic manipulation ] [ point clouds ] [ manipulation ] [ learning from demonstration ] [ robotics ] [ representation theory ] [ MCMC ] [ equivariance ] [ imitation learning ] [ end-to-end ] [ Langevin dynamics ] [ graph neural networks ] [ energy-based model ] [ Lie group ] [ category-level manipulation ] [ roto-translation equivariance ] [ SE(3) ] [ SO(3) ] [ equivariant robotics ] [ sample efficient ] [ unseen object ]


Abstract:

End-to-end learning for visual robotic manipulation is known to suffer from sample inefficiency, requiring large numbers of demonstrations. Spatial roto-translation equivariance, or SE(3)-equivariance, can be exploited to improve the sample efficiency of learning robotic manipulation. In this paper, we present SE(3)-equivariant models for visual robotic manipulation from point clouds that can be trained fully end-to-end. By utilizing the representation theory of the Lie group SE(3), we construct novel SE(3)-equivariant energy-based models that allow highly sample-efficient end-to-end learning. We show that our models can learn from scratch without prior knowledge, yet remain highly sample efficient (5 to 10 demonstrations are enough). Furthermore, we show that our models generalize to tasks with (i) previously unseen target object poses, (ii) previously unseen target object instances within a category, and (iii) previously unseen visual distractors. We validate our models' sample efficiency and generalizability on 6-DoF robotic manipulation tasks. Code is available at: https://github.com/tomato1mule/edf
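To make the two ingredients named in the abstract concrete, here is a minimal sketch of (1) a scalar energy over a point cloud and a candidate gripper pose that is SE(3)-invariant by construction, so the induced pose distribution is SE(3)-equivariant, and (2) a crude Langevin-style MCMC step that perturbs the pose with a twist in the Lie algebra se(3). This is NOT the authors' implementation (see the linked repository for the real Equivariant Descriptor Fields code); the `energy` function, `langevin_step`, and all step sizes below are hypothetical toy stand-ins for a learned model.

```python
# Minimal illustrative sketch, not the paper's implementation.
import numpy as np
from scipy.spatial.transform import Rotation as R

def energy(cloud, rot, trans):
    """Toy energy: norm of the cloud centroid expressed in the gripper frame.
    Because everything is computed in the gripper frame, applying the same
    rigid transform g to the cloud and the pose leaves the energy unchanged."""
    local = rot.inv().apply(cloud - trans)   # cloud in the gripper frame
    return float(np.linalg.norm(local.mean(axis=0)))

def langevin_step(cloud, rot, trans, rng, eps=1e-4, step=0.05, noise=0.01):
    """One Langevin-like update: finite-difference gradient of the energy
    w.r.t. a 6-D twist xi = (omega, v), then a noisy gradient-descent step."""
    grad = np.zeros(6)
    e0 = energy(cloud, rot, trans)
    for i in range(6):
        xi = np.zeros(6)
        xi[i] = eps
        grad[i] = (energy(cloud, R.from_rotvec(xi[:3]) * rot,
                          trans + xi[3:]) - e0) / eps
    xi = -step * grad + noise * rng.standard_normal(6)
    return R.from_rotvec(xi[:3]) * rot, trans + xi[3:]

# Numerically check the invariance claim with a random rigid transform g.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((128, 3))
rot, trans = R.identity(), np.zeros(3)
g_rot = R.from_rotvec(rng.standard_normal(3))
g_trans = rng.standard_normal(3)
e_before = energy(cloud, rot, trans)
e_after = energy(g_rot.apply(cloud) + g_trans,   # transformed observation
                 g_rot * rot,                    # transformed pose rotation
                 g_rot.apply(trans) + g_trans)   # transformed pose translation
assert abs(e_before - e_after) < 1e-9
```

Per the abstract and keywords, the actual models parameterize the energy with equivariant graph neural networks over point-cloud features and sample poses with gradient-based MCMC (Langevin dynamics) on SE(3); the toy version above only mirrors that structure.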
