arXiv Analytics

arXiv:1705.01253 [cs.CV]

The Forgettable-Watcher Model for Video Question Answering

Hongyang Xue, Zhou Zhao, Deng Cai

Published 2017-05-03 (Version 1)

A number of visual question answering approaches have been proposed recently, aiming to understand visual scenes by answering natural language questions. While image question answering has drawn significant attention, video question answering remains largely unexplored. Video-QA differs from Image-QA in that the relevant information and events are scattered across multiple frames. To better exploit the temporal structure of the videos and the phrasal structure of the answers, we propose two mechanisms, re-watching and re-reading, and combine them into the forgettable-watcher model. We then construct a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.
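The abstract does not give implementation details of the re-watching mechanism; as a rough intuition only, a re-watching pass can be thought of as question-conditioned attention over per-frame features. The sketch below illustrates that general idea with NumPy; all names and the dot-product scoring are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def rewatch(frame_feats, question_vec):
    """One illustrative 're-watching' pass: score each frame against the
    question representation, then return an attention-weighted summary
    of the frame features. (Hypothetical sketch, not the paper's model.)"""
    scores = frame_feats @ question_vec   # (T,) relevance of each frame
    weights = softmax(scores)             # attention distribution over frames
    return weights @ frame_feats          # (D,) question-conditioned summary

# toy example: 4 frames with 3-dim features, question vector aligned with axis 0
frames = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])
q = np.array([1.0, 0.0, 0.0])
summary = rewatch(frames, q)
```

In this toy run the first and fourth frames score highest, so the summary is pulled toward their features; repeating such passes (re-watching) lets a model revisit the video under the same question.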

Related articles:
arXiv:1909.02218 [cs.CV] (Published 2019-09-05)
A Better Way to Attend: Attention with Trees for Video Question Answering
arXiv:2104.03762 [cs.CV] (Published 2021-04-08)
Video Question Answering with Phrases via Semantic Roles
arXiv:2210.03941 [cs.CV] (Published 2022-10-08)
Learning Fine-Grained Visual Understanding for Video Question Answering via Decoupling Spatial-Temporal Modeling