Poster

Visual Semantic Navigation using Scene Priors

Wei Yang · Xiaolong Wang · Ali Farhadi · Abhinav Gupta · Roozbeh Mottaghi

Great Hall BC #68

Keywords: [ graph convolution networks ] [ knowledge graph ] [ scene prior ] [ visual navigation ] [ deep reinforcement learning ]


Abstract:

How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over years to search and navigate efficiently? For example, to find mugs we look in cabinets near the coffee machine, and for fruits we try the fridge. In this work, we focus on incorporating such semantic priors into the task of semantic navigation. We propose to use Graph Convolutional Networks to incorporate the prior knowledge into a deep reinforcement learning framework. The agent uses the features from the knowledge graph to predict its actions. For evaluation, we use the AI2-THOR framework. Our experiments show that semantic knowledge improves performance significantly. More importantly, we show improved generalization to unseen scenes and/or objects.
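To make the architecture concrete, below is a minimal, illustrative sketch (not the authors' released code) of how features from a Graph Convolutional Network over an object-relation knowledge graph might be fused with a visual observation to produce action logits and a value estimate for a deep RL agent. All module names, feature dimensions, and the fusion scheme are assumptions made for illustration.

```python
# Illustrative sketch: GCN-encoded scene priors fused with visual features
# in an actor-critic navigation policy. Sizes and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard for GCNs."""
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # Propagate node features over the normalized graph, then transform.
        return F.relu(self.linear(a_hat @ x))


class ScenePriorPolicy(nn.Module):
    def __init__(self, num_nodes: int, node_dim: int, visual_dim: int,
                 num_actions: int, hidden_dim: int = 512):
        super().__init__()
        self.gcn1 = GCNLayer(node_dim, 128)
        self.gcn2 = GCNLayer(128, 1)          # one scalar score per graph node
        self.fuse = nn.Linear(visual_dim + num_nodes, hidden_dim)
        self.actor = nn.Linear(hidden_dim, num_actions)  # action logits
        self.critic = nn.Linear(hidden_dim, 1)           # state value

    def forward(self, visual_feat, node_feats, a_hat):
        # Encode the knowledge graph into a per-node score vector.
        graph_feat = self.gcn2(self.gcn1(node_feats, a_hat), a_hat).squeeze(-1)
        # Concatenate with the visual observation and predict action/value.
        joint = F.relu(self.fuse(torch.cat([visual_feat, graph_feat], dim=-1)))
        return self.actor(joint), self.critic(joint)
```

A toy call with random inputs, shown only to indicate the expected tensor shapes:

```python
num_nodes, node_dim, visual_dim, num_actions = 20, 300, 2048, 6
adj = (torch.rand(num_nodes, num_nodes) > 0.8).float()
policy = ScenePriorPolicy(num_nodes, node_dim, visual_dim, num_actions)
logits, value = policy(torch.randn(visual_dim),
                       torch.randn(num_nodes, node_dim),
                       normalize_adjacency(adj))
```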
