Autonomous Learning of Object-Centric Abstractions for High-Level Planning

Steven James · Benjamin Rosman · George D Konidaris

Keywords: [ reinforcement learning ] [ transfer ] [ multitask ] [ planning ]

[ Abstract ]
Tue 26 Apr 2:30 a.m. PDT — 4:30 a.m. PDT


We propose a method for autonomously learning an object-centric representation of a continuous, high-dimensional environment that is suitable for planning. Such representations can be transferred immediately between tasks that share the same types of objects, so agents require fewer samples to learn a model of a new task. We first demonstrate our approach on a 2D crafting domain with numerous objects, where the agent learns a compact, lifted representation that generalises across objects. We then apply it to a series of Minecraft tasks, learning object-centric representations and object types, directly from pixel data, that can be leveraged to solve new tasks quickly. The learned representations support a task-level planner, yielding an agent that can transfer them to form complex, long-term plans.
