arXiv Analytics

arXiv:2102.04990 [cs.CV]

SG2Caps: Revisiting Scene Graphs for Image Captioning

Subarna Tripathi, Kien Nguyen, Tanaya Guha, Bang Du, Truong Q. Nguyen

Published 2021-02-09 (Version 1)

Mainstream image captioning models rely on Convolutional Neural Network (CNN) image features, with additional attention over salient regions and objects, to generate captions via recurrent models. Recently, scene graph representations of images have been used to augment captioning models so as to leverage their structural semantics, such as object entities, relationships, and attributes. Several studies have noted that the naive use of scene graphs from a black-box scene graph generator harms image captioning performance, and that scene graph-based captioning models must incur the overhead of explicitly using image features to generate decent captions. Addressing these challenges, we propose a framework, SG2Caps, that utilizes only the scene graph labels for competitive image captioning performance. The basic idea is to close the semantic gap between two scene graphs - one derived from the input image and the other from its caption. To achieve this, we leverage the spatial locations of objects and Human-Object-Interaction (HOI) labels as an additional HOI graph. Our framework outperforms existing scene graph-only captioning models by a large margin (CIDEr score of 110 vs. 71), indicating that scene graphs are a promising representation for image captioning. Direct utilization of the scene graph labels avoids expensive graph convolutions over high-dimensional CNN features, resulting in 49% fewer trainable parameters.
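For illustration, the label-only idea can be sketched as follows: scene-graph node labels (objects, relations, attributes) and box geometry are embedded directly, and information is propagated over the combined scene + HOI graph, with no CNN feature maps involved. The PyTorch module below is a minimal, hypothetical sketch; the class name, vocabulary size, embedding dimensions, and the single mean-pooling propagation step are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class LabelOnlySceneGraphEncoder(nn.Module):
    # Hypothetical sketch: embeds scene-graph node labels plus box geometry,
    # with no CNN features. Sizes and the single averaging step are illustrative.
    def __init__(self, num_labels=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, embed_dim)
        # 5-d box geometry: normalized (x1, y1, x2, y2, area)
        self.box_proj = nn.Linear(5, embed_dim)
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, labels, boxes, adj):
        # labels: (N,) integer node labels; boxes: (N, 5); adj: (N, N) 0/1 edges
        x = torch.cat([self.label_embed(labels), self.box_proj(boxes)], dim=-1)
        h = self.node_mlp(x)                      # per-node features from labels only
        # one round of neighborhood averaging over the combined scene + HOI graph
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return h + adj @ h / deg                  # (N, hidden_dim) node contexts

if __name__ == "__main__":
    enc = LabelOnlySceneGraphEncoder()
    labels = torch.tensor([3, 17, 42])            # e.g. person, ride, horse
    boxes = torch.rand(3, 5)
    adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    print(enc(labels, boxes, adj).shape)          # torch.Size([3, 256])

In such a setup, the resulting node contexts would feed a standard caption decoder; because the encoder consumes only label embeddings and box coordinates rather than high-dimensional CNN feature maps, the parameter count stays small, which is the trade-off the abstract highlights.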

Related articles:
arXiv:1903.12020 [cs.CV] (Published 2019-03-28)
Describing like humans: on diversity in image captioning
arXiv:1604.00790 [cs.CV] (Published 2016-04-04)
Image Captioning with Deep Bidirectional LSTMs
arXiv:2107.06912 [cs.CV] (Published 2021-07-14)
From Show to Tell: A Survey on Image Captioning