IFR-Explore: Learning Inter-object Functional Relationships in 3D Indoor Scenes

Qi Li · Kaichun Mo · Yanchao Yang · Hang Zhao · Leonidas Guibas

Keywords: [ 3D scene understanding ]

Poster: Spot D1 in Virtual World · Thu 28 Apr 10:30 a.m. — 12:30 p.m. PDT


Building embodied intelligent agents that can interact with 3D indoor environments has received increasing research attention in recent years. While most works focus on single-object or agent-object visual functionality and affordances, our work proposes to study a novel, underexplored kind of visual relation that is also important to perceive and model -- inter-object functional relationships (e.g., a switch on the wall turns a light on or off, a remote control operates the TV). Humans can infer these relationships with little or no effort, even when entering a new room, by drawing on strong prior knowledge (e.g., we know that buttons control electrical devices) or by performing a few exploratory interactions in cases of uncertainty (e.g., multiple switches and lights in the same room). In this paper, we take a first step toward building an AI system that learns inter-object functional relationships in 3D indoor environments. Our key technical contributions are modeling prior knowledge by training over large-scale scenes, and designing interactive policies for effectively exploring the training scenes and quickly adapting to novel test scenes. We create a new dataset based on the AI2Thor and PartNet datasets and perform extensive experiments that demonstrate the effectiveness of our proposed method.
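The switch-and-light scenario described above can be illustrated with a toy example: when several switches and lights coexist in a room, a few exploratory toggles suffice to disambiguate which switch controls which light. The sketch below is purely illustrative; the `Room`, `toggle`, and `infer_mapping` names and the environment model are assumptions for this example, not the paper's actual method or API.

```python
# Hypothetical sketch: resolving ambiguous switch-to-light mappings through
# exploratory interactions. The environment model is an illustrative
# assumption, not part of the paper's system.

class Room:
    """Toy environment hiding a one-to-one mapping from switches to lights."""
    def __init__(self, mapping):
        self._mapping = mapping                       # e.g. {"switch_0": "light_1"}
        self.light_on = {l: False for l in mapping.values()}

    def toggle(self, switch):
        """Flip the state of whichever light this switch controls."""
        light = self._mapping[switch]
        self.light_on[light] = not self.light_on[light]

def infer_mapping(room, switches, lights):
    """Toggle each switch once, observe which light changes state,
    then restore the room to its original state."""
    inferred = {}
    for s in switches:
        before = dict(room.light_on)
        room.toggle(s)
        changed = [l for l in lights if room.light_on[l] != before[l]]
        inferred[s] = changed[0]                      # one light flips per toggle
        room.toggle(s)                                # undo the exploratory action
    return inferred

room = Room({"switch_0": "light_1", "switch_1": "light_0"})
print(infer_mapping(room, ["switch_0", "switch_1"], ["light_0", "light_1"]))
# → {'switch_0': 'light_1', 'switch_1': 'light_0'}
```

Each exploratory interaction here yields one bit of relational evidence; the paper's setting is far richer (visual observations, many object and relation types), but the same explore-observe-update loop underlies it.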
