In-Person Poster presentation / top 25% paper

Learning About Progress From Experts

Jake Bruce · Ankit Anand · Bogdan Mazoure · Rob Fergus

MH1-2-3-4 #98

Keywords: [ nethack ] [ exploration ] [ learning from demonstrations ] [ reinforcement learning ]


Abstract:

Many important tasks involve some notion of long-term progress in multiple phases: for example, to clean a shelf, it must first be cleared of items, cleaning products applied, and the items then placed back on the shelf. In this work, we explore the use of expert demonstrations in long-horizon tasks to learn a monotonically increasing function that summarizes progress. This function can then be used to aid agent exploration in environments with sparse rewards. As a case study, we consider the NetHack environment, which requires long-term progress at a variety of scales and is far from being solved by existing approaches. In this environment, we demonstrate that by learning a model of long-term progress from expert data containing only observations, we can achieve efficient exploration in challenging sparse-reward tasks, well beyond what is possible with current state-of-the-art approaches. We have made the curated gameplay dataset used in this work available at https://github.com/deepmind/nao_top10.
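The abstract does not spell out the training objective, but one plausible way to learn a monotone progress function from observation-only expert episodes is a pairwise ranking loss: sample two observations from the same episode and train the network to score the temporally later one higher. The sketch below illustrates this idea; it is not the authors' code, and the network `ProgressNet`, the helper `ranking_loss`, the PyTorch framework choice, and all shapes and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): learn a scalar
# progress estimate from expert observations via a pairwise ranking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressNet(nn.Module):
    """Maps an observation vector to a scalar progress estimate."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def ranking_loss(model: ProgressNet, episode: torch.Tensor,
                 n_pairs: int = 64) -> torch.Tensor:
    """Sample timestep pairs from one episode (shape [T, obs_dim]);
    the later observation in each pair should receive a higher score."""
    T = episode.shape[0]
    idx = torch.randint(0, T, (n_pairs, 2))
    t_early = idx.min(dim=1).values
    t_late = idx.max(dim=1).values
    mask = t_late > t_early  # drop degenerate pairs with equal timesteps
    p_early = model(episode[t_early[mask]])
    p_late = model(episode[t_late[mask]])
    # Logistic ranking loss: equivalent to -log sigmoid(p_late - p_early),
    # which pushes the later observation's score above the earlier one's.
    return F.softplus(p_early - p_late).mean()

if __name__ == "__main__":
    obs_dim = 128  # illustrative; real NetHack observations need an encoder
    model = ProgressNet(obs_dim)
    opt = torch.optim.Adam(model.parameters(), lr=3e-4)
    episode = torch.randn(500, obs_dim)  # stand-in for one expert episode
    for step in range(1000):
        loss = ranking_loss(model, episode)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # At agent time, the increase in predicted progress between consecutive
    # observations could serve as a shaped exploration bonus.
    with torch.no_grad():
        bonus = model(episode[1:]) - model(episode[:-1])
```

Under this assumed objective, the learned function is only encouraged to be monotone along expert trajectories, which matches the abstract's framing: it summarizes long-term progress rather than predicting environment reward, and the difference in its output between successive states can be used as a dense exploration signal in otherwise sparse-reward tasks.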
