arXiv Analytics


arXiv:1902.04213 [cs.CV]

You Only Look & Listen Once: Towards Fast and Accurate Visual Grounding

Chaorui Deng, Qi Wu, Guanghui Xu, Zhuliang Yu, Yanwu Xu, Kui Jia, Mingkui Tan

Published 2019-02-12, Version 1

Visual Grounding (VG) aims to locate the most relevant region in an image based on a flexible natural language query rather than a pre-defined label, which makes it a more broadly useful technique than object detection in practice. Most state-of-the-art VG methods operate in a two-stage manner: in the first stage, an object detector generates a set of object proposals from the input image, and the second stage is formulated as a cross-modal matching problem that finds the best match between the language query and all region proposals. This is rather inefficient, because hundreds of proposals produced in the first stage may need to be compared in the second stage, and the strategy is also inaccurate. In this paper, we propose a simple, intuitive and much more elegant one-stage detection based method that joins the region proposal and matching stages into a single detection network. The detection is conditioned on the input query through a stack of novel Relation-to-Attention modules that transform the image-to-query relationship into a relation map, which is used to predict the bounding box directly without generating large numbers of useless region proposals. During inference, our approach is about 20x ~ 30x faster than previous methods and, remarkably, it achieves an 18% ~ 41% absolute performance improvement over the state-of-the-art results on several benchmark datasets. We release our code and all pre-trained models at https://github.com/openblack/rvg.
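The abstract describes conditioning detection on the language query through Relation-to-Attention modules that produce a relation map, which is then used to regress the bounding box directly without region proposals. Below is a minimal PyTorch sketch of how such a query-conditioned, proposal-free grounding head might look; the module names (RelationToAttention, RelationGroundingHead), the fusion scheme, and all dimensions are illustrative assumptions, not the authors' released implementation at the repository above.

```python
# Hypothetical sketch of a one-stage, query-conditioned grounding head.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class RelationToAttention(nn.Module):
    """Fuse a query embedding with image features into a spatial relation map."""

    def __init__(self, img_dim: int, query_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.img_proj = nn.Conv2d(img_dim, hidden_dim, kernel_size=1)
        self.query_proj = nn.Linear(query_dim, hidden_dim)
        self.relation = nn.Conv2d(hidden_dim, 1, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, C, H, W); query_feat: (B, D)
        img = self.img_proj(img_feat)                          # (B, h, H, W)
        qry = self.query_proj(query_feat)[:, :, None, None]   # (B, h, 1, 1)
        # Element-wise fusion, then collapse to a single-channel relation map.
        return torch.sigmoid(self.relation(torch.tanh(img * qry)))  # (B, 1, H, W)


class RelationGroundingHead(nn.Module):
    """Predict one normalized box (cx, cy, w, h) directly, with no proposals."""

    def __init__(self, img_dim: int, query_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.rel2attn = RelationToAttention(img_dim, query_dim, hidden_dim)
        self.box_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(img_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 4), nn.Sigmoid(),  # normalized box coordinates
        )

    def forward(self, img_feat: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
        attn = self.rel2attn(img_feat, query_feat)  # (B, 1, H, W)
        attended = img_feat * attn                  # re-weight image features by the relation map
        return self.box_head(attended)              # (B, 4)


if __name__ == "__main__":
    head = RelationGroundingHead(img_dim=512, query_dim=300)
    boxes = head(torch.randn(2, 512, 20, 20), torch.randn(2, 300))
    print(boxes.shape)  # torch.Size([2, 4])
```

Because the relation map is computed once per query and feeds a single regression head, inference cost is independent of the number of candidate regions, which is the source of the speedup the abstract claims over two-stage proposal-and-match pipelines.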

Comments: 10 pages, 5 figures, submitted to CVPR 2019
Categories: cs.CV
Related articles:
arXiv:2003.02065 [cs.CV] (Published 2020-03-04)
Mixup Regularization for Region Proposal based Object Detectors
arXiv:2211.06588 [cs.CV] (Published 2022-11-12)
DEYO: DETR with YOLO for Step-by-Step Object Detection
arXiv:2005.10550 [cs.CV] (Published 2020-05-21)
Region Proposals for Saliency Map Refinement for Weakly-supervised Disease Localisation and Classification