

Poster

Meta-Imitation Learning by Watching Video Demonstrations

Jiayi Li · Tao Lu · Xiaoge Cao · Yinghao Cai · Shuo Wang

Keywords: [ one-shot learning ] [ generative adversarial networks ]


Abstract:

Meta-Imitation Learning is a promising technique that enables a robot to learn a new task from one or a few human demonstrations. However, it usually requires a large number of demonstrations from both humans and robots during the meta-training phase, which makes data collection laborious, especially for recording robot actions and specifying the correspondence between human and robot. In this work, we present an approach to meta-imitation learning that learns by watching human video demonstrations. In comparison to prior work, our approach translates human videos into practical robot demonstrations and trains the meta-policy with an adaptive loss based on the quality of the translated data. Our approach relies only on human videos and does not require robot demonstrations, which facilitates data collection and is more in line with human imitation behavior. Experiments show that our method achieves performance comparable to the baseline in quickly learning a set of vision-based tasks from a single video demonstration.
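The abstract does not spell out how the adaptive loss is computed; a minimal sketch of one plausible interpretation is shown below, where each translated demonstration contributes to a behavioral-cloning loss in proportion to a per-sample quality score (for instance, a discriminator score from the video-to-robot translation GAN). The policy architecture, dimensions, and the `quality` scores here are placeholders, not the authors' actual model.

```python
import torch
import torch.nn as nn

# Hypothetical policy: maps an image observation to an action vector.
policy = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 4),
)

def quality_weighted_bc_loss(obs, actions, quality):
    """Behavioral-cloning loss where each translated demonstration is
    weighted by a quality score in [0, 1], so low-quality translations
    contribute less to the policy update."""
    pred = policy(obs)                                 # (B, action_dim)
    per_sample = ((pred - actions) ** 2).mean(dim=1)   # (B,)
    return (quality * per_sample).mean()

# Toy batch of translated demonstrations (placeholder data).
obs = torch.randn(8, 3, 32, 32)
actions = torch.randn(8, 4)
quality = torch.rand(8)  # assumed per-sample quality scores from the translator
loss = quality_weighted_bc_loss(obs, actions, quality)
loss.backward()
```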
