arXiv Analytics

arXiv:2302.01328 [cs.CV]

$IC^3$: Image Captioning by Committee Consensus

David M. Chan, Austin Myers, Sudheendra Vijayanarasimhan, David A. Ross, John Canny

Published 2023-02-02 (Version 1)

If you ask a human to describe an image, they might do so in a thousand different ways. Traditionally, image captioning models are trained to approximate the reference distribution of image captions; however, doing so encourages captions that are viewpoint-impoverished. Such captions often focus on only a subset of the possible details, while ignoring potentially useful information in the scene. In this work, we introduce a simple yet novel method, "Image Captioning by Committee Consensus" ($IC^3$), designed to generate a single caption that captures high-level details from several viewpoints. Notably, humans rate captions produced by $IC^3$ at least as helpful as those from baseline SOTA models more than two-thirds of the time, and $IC^3$ captions can improve the performance of SOTA automated recall systems by up to 84%, indicating significant material improvements over existing SOTA approaches to visual description. Our code is publicly available at https://github.com/DavidMChan/caption-by-committee
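To make the committee-consensus idea concrete, the sketch below shows one way the high-level description in the abstract could be realized: sample several candidate captions ("viewpoints") for the same image, then fuse them into a single caption. This is only an illustrative sketch, not the released caption-by-committee implementation; `generate_caption`, `summarize_captions`, and the `committee_size` of 8 are hypothetical placeholders standing in for an off-the-shelf captioning model and a summarization step.

```python
# Hypothetical sketch of committee-consensus captioning, based only on the
# abstract's description. The two callables are placeholders, not the paper's API.

from typing import Callable, List


def committee_consensus_caption(
    image: object,
    generate_caption: Callable[[object], str],        # samples one caption per call
    summarize_captions: Callable[[List[str]], str],   # fuses candidates into one caption
    committee_size: int = 8,
) -> str:
    """Sample several candidate captions for the same image, then ask a
    summarizer to produce a single caption covering their combined details."""
    candidates = [generate_caption(image) for _ in range(committee_size)]
    # De-duplicate while preserving order, so the summarizer sees distinct viewpoints.
    unique_candidates = list(dict.fromkeys(candidates))
    return summarize_captions(unique_candidates)
```

In this framing, the "committee" is simply the set of sampled captions, and the consensus step is delegated to whatever summarization model the caller supplies.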

Related articles:
arXiv:2202.10492 [cs.CV] (Published 2022-02-21)
CaMEL: Mean Teacher Learning for Image Captioning
arXiv:2210.10914 [cs.CV] (Published 2022-10-19)
Prophet Attention: Predicting Attention with Future Attention for Improved Image Captioning
arXiv:1805.09137 [cs.CV] (Published 2018-05-13)
Image Captioning