

Poster

Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration

Alexandre Péré · Sébastien Forestier · Olivier Sigaud · Pierre-Yves Oudeyer

East Meeting level; 1,2,3 #24

Abstract:

Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real-world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental two-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then, in a second stage, goal exploration proceeds by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations.
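The two-stage pipeline described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: PCA stands in for the deep representation learner of stage 1, the synthetic `obs` array stands in for raw sensor observations, and stage 2 is reduced to sampling goals uniformly in the learned latent space and retrieving the nearest stored outcome. All names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: perceptual learning ---
# Synthetic stand-in for passive raw sensor observations of world changes
# (in the paper this would be high-dimensional perceptual data, and the
# encoder would be a deep representation learning model, not PCA).
obs = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))
mean = obs.mean(axis=0)
_, _, vt = np.linalg.svd(obs - mean, full_matrices=False)
components = vt[:2]  # a 2-D learned latent space used as the goal space

def encode(x):
    """Project raw observations into the learned latent (goal) space."""
    return (x - mean) @ components.T

# --- Stage 2: goal exploration in the learned latent space ---
latent = encode(obs)                      # outcomes of known policies
lo, hi = latent.min(axis=0), latent.max(axis=0)

def explore(n_goals=10):
    """Sample self-generated goals in the latent space and, for each goal,
    return the index of the stored outcome closest to it (a toy
    nearest-neighbour goal-reaching step)."""
    goals = rng.uniform(lo, hi, size=(n_goals, 2))
    dists = np.linalg.norm(latent[None, :, :] - goals[:, None, :], axis=-1)
    return dists.argmin(axis=1)

chosen = explore()
```

The key point the sketch mirrors is the decoupling: the goal space is learned once from passive observations, and exploration then operates entirely inside that learned space rather than in a hand-engineered feature space.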
