arXiv:2210.10914 [cs.CV]

Prophet Attention: Predicting Attention with Future Attention for Improved Image Captioning

Fenglin Liu, Xuewei Ma, Xuancheng Ren, Xian Wu, Wei Fan, Yuexian Zou, Xu Sun

Published 2022-10-19 (Version 1)

Recently, attention-based models have been used extensively in many sequence-to-sequence learning systems. For image captioning in particular, attention-based models are expected to ground the correct image regions with the proper generated words. However, at each time step of the decoding process, these models usually use the hidden state of the current input to attend to the image regions. Under this setting, they suffer from a "deviated focus" problem: they calculate the attention weights based on previous words rather than the word to be generated, impairing both grounding and captioning performance. In this paper, we propose Prophet Attention, which operates in a manner similar to self-supervision. In the training stage, this module utilizes future information to calculate the "ideal" attention weights over image regions. These "ideal" weights are then used to regularize the "deviated" attention. In this manner, image regions are grounded with the correct words. Prophet Attention can be easily incorporated into existing image captioning models to improve both their grounding and captioning performance. Experiments on the Flickr30k Entities and MSCOCO datasets show that Prophet Attention consistently outperforms baselines in both automatic metrics and human evaluations. Notably, we set new state-of-the-art results on both benchmark datasets and achieve first place on the leaderboard of the online MSCOCO benchmark in terms of the default ranking score, i.e., CIDEr-c40.
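To make the training-time mechanism described above concrete, here is a minimal, hypothetical PyTorch sketch of the regularization idea. The module name `AttentionOverRegions`, the additive-attention form, the use of the next word's embedding as the "future" query, the stop-gradient on the ideal weights, and the L1 distance are all illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: the names, the additive-attention form, and the L1
# regularizer are illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionOverRegions(nn.Module):
    """Additive attention over image region features (assumed form)."""

    def __init__(self, region_dim: int, query_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.w_r = nn.Linear(region_dim, hidden_dim)
        self.w_q = nn.Linear(query_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1)

    def forward(self, regions: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # regions: (B, N, region_dim); query: (B, query_dim)
        scores = self.v(torch.tanh(self.w_r(regions) + self.w_q(query).unsqueeze(1)))
        return F.softmax(scores.squeeze(-1), dim=-1)  # (B, N) attention weights


def prophet_regularizer(attend: AttentionOverRegions,
                        regions: torch.Tensor,
                        h_t: torch.Tensor,
                        future_word_emb: torch.Tensor,
                        beta: float = 1.0) -> torch.Tensor:
    """Training-time regularizer: pull the "deviated" weights (computed from the
    current hidden state, i.e. conditioned on previous words) toward the "ideal"
    weights computed from the embedding of the word about to be generated.
    Assumes h_t and future_word_emb share the attention query dimension."""
    alpha_deviated = attend(regions, h_t)                    # what the decoder actually uses
    alpha_ideal = attend(regions, future_word_emb).detach()  # future-informed target, treated as fixed here
    # L1 distance between the two attention distributions (an assumed choice)
    return beta * (alpha_deviated - alpha_ideal).abs().sum(dim=-1).mean()
```

In such a setup, the regularizer would simply be added to the usual cross-entropy captioning loss during training; at inference time no future words are available or needed, so decoding proceeds unchanged.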

Related articles:
arXiv:2102.04990 [cs.CV] (Published 2021-02-09)
SG2Caps: Revisiting Scene Graphs for Image Captioning
arXiv:1604.00790 [cs.CV] (Published 2016-04-04)
Image Captioning with Deep Bidirectional LSTMs
arXiv:1708.05271 [cs.CV] (Published 2017-08-17)
Incorporating Copying Mechanism in Image Captioning for Learning Novel Objects