

Poster in Workshop: 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities

Continuous Scene Graph Generation for Imitation Learning of Everyday Tasks

Maëlic Neau · Paulo Santos · Anne-Gwenn Bosser · Cedric Buche


Abstract:

Deploying autonomous robots that can learn new skills in everyday environments requires strong generalization and scalability. To this end, symbolic imitation learning of human actions is a promising direction. Symbolic representations have the advantage of being explainable, a key requirement for the acceptance of autonomous robots in everyday environments, and by learning symbolic representations of skills instead of motion-level representations, one can abstract the critical information needed for better generalization. In the current imitation learning paradigm, approaches are either constrained by strict ontologies that do not scale, or rely on deep learning models that cannot represent symbolic knowledge. In this work, we propose to use recent Scene Graph Generation (SGG) models to power a new type of Continuous Scene Graph representation that can be refined over time and used as the internal memory of an autonomous robot. We demonstrate that this representation is effective for modeling symbolic representations of human actions end-to-end, and we evaluate our approach on the task of automatic planning domain generation from observations. Results on daily-life activity datasets show the potential of our approach in real-world settings.
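To make the idea of a continuously refined scene graph serving as robot memory more concrete, the following is a minimal sketch, not the authors' implementation: it assumes a (hypothetical) SGG model that emits (subject, predicate, object) triples with confidence scores per frame, blends them into a persistent graph by exponential smoothing, and exports the stable relations as PDDL-like ground atoms for planning domain generation. All class and function names (ContinuousSceneGraph, Triple, update, to_planning_facts) and the smoothing rule are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triple:
    """One symbolic relation detected by a scene graph generator."""
    subject: str
    predicate: str
    obj: str

@dataclass
class ContinuousSceneGraph:
    """Toy internal memory: scene-graph triples with running confidence scores."""
    scores: dict = field(default_factory=dict)

    def update(self, detections, alpha=0.5):
        """Blend new per-frame SGG detections into the stored graph (exponential smoothing)."""
        for triple, conf in detections:
            prev = self.scores.get(triple, 0.0)
            self.scores[triple] = (1 - alpha) * prev + alpha * conf

    def snapshot(self, threshold=0.5):
        """Return the symbolic relations currently believed to hold."""
        return [t for t, s in self.scores.items() if s >= threshold]

    def to_planning_facts(self):
        """Render the current snapshot as PDDL-style ground atoms."""
        return [f"({t.predicate} {t.subject} {t.obj})" for t in self.snapshot()]

# Usage: two consecutive observations from a hypothetical SGG model.
memory = ContinuousSceneGraph()
memory.update([(Triple("cup", "on", "table"), 0.9),
               (Triple("hand", "holding", "cup"), 0.3)])
memory.update([(Triple("hand", "holding", "cup"), 0.8)])
print(memory.to_planning_facts())  # e.g. ['(on cup table)', '(holding hand cup)']
```

The design choice illustrated here is that the graph persists across observations and is refined rather than rebuilt, which is what lets a symbolic, explainable state serve as the robot's memory and feed downstream planning domain generation.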
