Poster
Neural Probabilistic Motor Primitives for Humanoid Control
Josh Merel · Leonard Hasenclever · Alexandre Galashov · Arun Ahuja · Vu Pham · Greg Wayne · Yee Whye Teh · Nicolas Heess
Great Hall BC #63
Keywords: [ reinforcement learning ] [ distillation ] [ continuous control ] [ motor primitives ] [ humanoid control ] [ motion capture ] [ one-shot imitation ]
We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids. To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck. We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space. The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories. Additionally, we demonstrate that it is straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and that the resulting movements are relatively naturalistic. To support the training of our model, we compare two approaches for offline policy cloning, including an experience-efficient method which we call linear feedback policy cloning. We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA) summarizing our results.
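The architecture described above — an inverse model with a latent-variable bottleneck, where an encoder compresses a snippet of future reference motion into a latent "motor intention" and a decoder maps the current state plus that latent to an action — can be sketched minimally as follows. This is an illustrative NumPy sketch, not the authors' implementation: all dimensions, the single-hidden-layer networks, and the random placeholder weights are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
STATE_DIM, FUTURE_DIM, LATENT_DIM, ACTION_DIM, HIDDEN = 8, 16, 4, 3, 32

def mlp_params(in_dim, out_dim):
    # Single-hidden-layer network with random placeholder weights.
    return {
        "W1": rng.normal(0, 0.1, (in_dim, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0, 0.1, (HIDDEN, out_dim)),
        "b2": np.zeros(out_dim),
    }

def mlp(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

# Encoder: (current state, snippet of future reference motion) -> latent mean/log-variance.
enc = mlp_params(STATE_DIM + FUTURE_DIM, 2 * LATENT_DIM)
# Decoder (inverse model): (current state, latent) -> action.
dec = mlp_params(STATE_DIM + LATENT_DIM, ACTION_DIM)

def encode(state, future_ref):
    stats = mlp(enc, np.concatenate([state, future_ref]))
    return stats[:LATENT_DIM], stats[LATENT_DIM:]  # mean, log-variance

def act(state, z):
    return mlp(dec, np.concatenate([state, z]))

# One control step of imitation: encode a reference snippet into a latent,
# sample it (reparameterization trick), and decode an action for this state.
state = rng.normal(size=STATE_DIM)
future_ref = rng.normal(size=FUTURE_DIM)
mean, logvar = encode(state, future_ref)
z = mean + np.exp(0.5 * logvar) * rng.normal(size=LATENT_DIM)
action = act(state, z)
```

In training, the encoder and decoder would be fit jointly by cloning expert actions under a KL regularizer on the latent; at reuse time, a new task controller can emit latents `z` directly, treating the frozen decoder as a motor primitive space.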