Poster

Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching

Pierre-Alexandre Kamienny · Jean Tarbouriech · Sylvain Lamprier · Alessandro Lazaric · Ludovic Denoyer

Keywords: [ skill discovery ] [ unsupervised reinforcement learning ] [ mutual information ]


Abstract:

Learning meaningful behaviors in the absence of reward is a difficult problem in reinforcement learning. A desirable and challenging unsupervised objective is to learn a set of diverse skills that provide thorough coverage of the state space while being directed, i.e., reliably reaching distinct regions of the environment. In this paper, we build on the mutual information framework for skill discovery and introduce UPSIDE, which addresses the coverage-directedness trade-off in the following ways: 1) We design policies with a decoupled structure: a directed skill, trained to reach a specific region, followed by a diffusing part that induces local coverage around it. 2) We optimize policies by maximizing their number under the constraint that each one reaches a distinct region of the environment (i.e., the policies are sufficiently discriminable), and we prove that this serves as a lower bound on the original mutual information objective. 3) Finally, we compose the learned directed skills into a growing tree that adaptively covers the environment. We illustrate in several navigation and control environments how the skills learned by UPSIDE solve sparse-reward downstream tasks better than existing baselines.
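As a quick sketch of why point 2 can lower-bound the mutual information, one can start from the standard Barber-Agakov variational bound with a learned skill discriminator; the discriminator q_\phi, the threshold \eta, and the exact form of the discriminability constraint below are our notation and assumptions for illustration, not taken from this abstract:

    % Variational lower bound on the skill mutual information,
    % with q_\phi(z \mid s) a learned skill discriminator:
    I(S; Z) = H(Z) - H(Z \mid S)
            \ge H(Z) + \mathbb{E}_{z \sim p(z),\, s \sim \pi_z}\left[ \log q_\phi(z \mid s) \right]

    % With Z uniform over the N current policies, H(Z) = \log N.
    % Assume each policy's diffusing part is \eta-discriminable, i.e.
    % \mathbb{E}_{s \sim \pi_z}\left[ \log q_\phi(z \mid s) \right] \ge \log \eta. Then
    I(S; Z) \ge \log N + \log \eta = \log(N \eta)

Under this reading, adding one more \eta-discriminable policy increases the bound by \log((N+1)/N) > 0, which is consistent with the objective of maximizing the number of policies subject to each reaching a distinct, discriminable region.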
