arXiv Analytics

arXiv:1810.06543 [cs.CV]

Visual Semantic Navigation using Scene Priors

Wei Yang, Xiaolong Wang, Ali Farhadi, Abhinav Gupta, Roozbeh Mottaghi

Published 2018-10-15 (Version 1)

How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over the years to efficiently search and navigate? For example, to find a mug we search the cabinets near the coffee machine, and for fruit we try the fridge. In this work, we focus on incorporating such semantic priors into the task of semantic navigation. We propose to use Graph Convolutional Networks to incorporate the prior knowledge into a deep reinforcement learning framework; the agent uses features from the knowledge graph to predict its actions. For evaluation, we use the AI2-THOR framework. Our experiments show that semantic knowledge significantly improves performance. More importantly, we show improved generalization to unseen scenes and/or objects. The supplementary video can be accessed at the following link: https://youtu.be/otKjuO805dE.
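
The abstract only sketches the architecture, so the PyTorch snippet below is a minimal, hypothetical illustration of the core idea: encode a knowledge graph with graph convolutions and fuse the resulting graph features with a visual embedding to predict navigation actions. The class names, layer sizes, mean-pooling, and concatenation-based fusion here are assumptions made for illustration, not the authors' exact model (which is trained end-to-end with deep reinforcement learning in AI2-THOR).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj_norm):
        # adj_norm: normalized adjacency with self-loops, shape [N, N]
        return F.relu(self.linear(adj_norm @ node_feats))


class ScenePriorPolicy(nn.Module):
    """Hypothetical policy head: fuses a visual embedding with a GCN
    encoding of the knowledge graph and outputs action logits."""

    def __init__(self, node_dim, visual_dim, hidden_dim, num_actions):
        super().__init__()
        self.gcn1 = GCNLayer(node_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.policy = nn.Sequential(
            nn.Linear(visual_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, visual_feat, node_feats, adj_norm):
        h = self.gcn2(self.gcn1(node_feats, adj_norm), adj_norm)  # [N, hidden]
        graph_feat = h.mean(dim=0)                                # pool over graph nodes
        return self.policy(torch.cat([visual_feat, graph_feat], dim=-1))


# Toy usage with made-up sizes: 20 object categories, 4 navigation actions.
N, node_dim, visual_dim, hidden_dim = 20, 32, 512, 128
adj = torch.eye(N)  # placeholder adjacency; real priors would add object-object edges
adj_norm = adj / adj.sum(dim=1, keepdim=True)
model = ScenePriorPolicy(node_dim, visual_dim, hidden_dim, 4)
logits = model(torch.randn(visual_dim), torch.randn(N, node_dim), adj_norm)
```

In this sketch the graph features are pooled and concatenated with the visual features before the policy layers; the paper's actual fusion and training procedure should be taken from the full text rather than this example.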

Related articles:
arXiv:2012.04512 [cs.CV] (Published 2020-12-08)
SSCNav: Confidence-Aware Semantic Scene Completion for Visual Semantic Navigation
arXiv:1910.00324 [cs.CV] (Published 2019-10-01)
Graph convolutional networks for learning with few clean and many noisy labels
arXiv:1706.05206 [cs.CV] (Published 2017-06-16)
Dynamic Filters in Graph Convolutional Networks