arXiv:1604.00790 [cs.CV]

Image Captioning with Deep Bidirectional LSTMs

Cheng Wang, Haojin Yang, Christian Bartz, Christoph Meinel

Published 2016-04-04 (Version 1)

This work presents an end-to-end trainable deep bidirectional LSTM (Long Short-Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long-term visual-language interactions by exploiting history and future context information in a high-level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of the nonlinearity transition in different ways, are proposed to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale, and vertical mirroring are proposed to prevent overfitting when training deep models. We visualize the evolution of the bidirectional LSTM's internal states over time and qualitatively analyze how our models "translate" images into sentences. The proposed models are evaluated on caption generation and image-sentence retrieval tasks on three benchmark datasets: Flickr8K, Flickr30K, and MSCOCO. We demonstrate that bidirectional LSTM models achieve performance highly competitive with the state of the art on caption generation, even without integrating additional mechanisms (e.g., object detection or attention models), and significantly outperform recent methods on retrieval tasks.
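To make the described architecture concrete, here is a minimal PyTorch sketch of a bidirectional LSTM captioner over CNN features. This is not the authors' implementation (the paper predates PyTorch); the class name `BiLSTMCaptioner`, the 2048-dimensional image features, the vocabulary size, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of a CNN + two-directional LSTM captioner, loosely
# following the abstract: one LSTM reads the caption forward (history
# context), the other backward (future context), on top of CNN features.
import torch
import torch.nn as nn

class BiLSTMCaptioner(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        # Project CNN image features into the word-embedding space.
        self.img_proj = nn.Linear(feat_dim, embed_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two separate LSTMs, as in the paper's description.
        self.fwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.bwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fwd_out = nn.Linear(hidden_dim, vocab_size)
        self.bwd_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, captions):
        # Prepend the projected image feature as the first "token".
        img_tok = self.img_proj(img_feats).unsqueeze(1)  # (B, 1, E)
        words = self.embed(captions)                     # (B, T, E)
        seq = torch.cat([img_tok, words], dim=1)         # (B, T+1, E)
        fwd_h, _ = self.fwd_lstm(seq)
        # The backward LSTM sees the same sequence reversed in time.
        bwd_h, _ = self.bwd_lstm(seq.flip(1))
        # Each direction predicts the caption independently; the backward
        # logits are flipped back so both outputs align in time.
        return self.fwd_out(fwd_h), self.bwd_out(bwd_h).flip(1)

model = BiLSTMCaptioner(vocab_size=10000)
logits_f, logits_b = model(torch.randn(2, 2048),
                           torch.randint(0, 10000, (2, 15)))
```

At inference, one plausible decoding scheme consistent with the abstract is to generate a sentence with each direction and keep the one with the higher overall probability; the multi-crop, multi-scale, and vertical-mirror augmentations would be applied to images before feature extraction.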

Related articles:
arXiv:1912.08226 [cs.CV] (Published 2019-12-17)
M$^2$: Meshed-Memory Transformer for Image Captioning
arXiv:1708.05271 [cs.CV] (Published 2017-08-17)
Incorporating Copying Mechanism in Image Captioning for Learning Novel Objects
arXiv:2107.06912 [cs.CV] (Published 2021-07-14)
From Show to Tell: A Survey on Image Captioning