arXiv:1904.11574 [cs.CV]

TVQA+: Spatio-Temporal Grounding for Video Question Answering

Jie Lei, Licheng Yu, Tamara L. Berg, Mohit Bansal

Published 2019-04-25 (Version 1)

We present the task of Spatio-Temporal Video Question Answering, which requires intelligent systems to simultaneously retrieve relevant moments and detect referenced visual concepts (people and objects) in order to answer natural language questions about videos. We first augment the TVQA dataset with 310.8k bounding boxes, linking depicted objects to the visual concepts mentioned in questions and answers; we call this augmented dataset TVQA+. We then propose Spatio-Temporal Answerer with Grounded Evidence (STAGE), a unified framework that grounds evidence in both the spatial and temporal domains to answer questions about videos. Comprehensive experiments and analyses demonstrate the effectiveness of our framework and show how the rich annotations in TVQA+ benefit the question answering task. As a by-product of performing this joint task, our model produces more insightful intermediate results. The dataset and code are publicly available.
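
To make the annotation structure concrete, below is a minimal sketch of what a single TVQA+-style example might contain: a multiple-choice question, a grounded temporal span, and per-frame bounding boxes linking referenced visual concepts to image regions. All field names, types, and values here are illustrative assumptions for exposition, not the actual schema of the released dataset.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: field names and layout are assumptions,
# not the actual TVQA+ release format.

@dataclass
class BoundingBox:
    frame_id: int   # frame index within the clip
    label: str      # visual concept, e.g. "mug" or a character name
    x: float        # top-left x coordinate (pixels)
    y: float        # top-left y coordinate (pixels)
    w: float        # box width (pixels)
    h: float        # box height (pixels)

@dataclass
class SpatioTemporalQAExample:
    clip_id: str                # video clip identifier (hypothetical)
    question: str               # natural language question
    answers: List[str]          # answer candidates
    correct_idx: int            # index of the correct answer
    ts_start: float             # grounded moment start (seconds)
    ts_end: float               # grounded moment end (seconds)
    boxes: List[BoundingBox] = field(default_factory=list)

# A toy instance: the temporal span localizes the relevant moment,
# and each box ties a concept mentioned in the QA pair to a region.
example = SpatioTemporalQAExample(
    clip_id="clip_0001",
    question="What is the character holding when she answers?",
    answers=["A laptop", "A mug", "A comic book", "A phone", "A towel"],
    correct_idx=1,
    ts_start=12.4,
    ts_end=17.9,
    boxes=[BoundingBox(frame_id=186, label="mug",
                       x=310.0, y=205.0, w=48.0, h=60.0)],
)
```

A model like STAGE would be supervised on all three signals at once: answer classification, temporal span regression, and spatial attention over the annotated boxes, which is what yields the grounded intermediate results mentioned above.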

Related articles:
arXiv:2203.01225 [cs.CV] (Published 2022-03-02)
Video Question Answering: Datasets, Algorithms and Challenges
arXiv:2205.04061 [cs.CV] (Published 2022-05-09)
Multilevel Hierarchical Network with Multiscale Sampling for Video Question Answering
arXiv:2210.03941 [cs.CV] (Published 2022-10-08)
Learning Fine-Grained Visual Understanding for Video Question Answering via Decoupling Spatial-Temporal Modeling