arXiv Analytics


arXiv:2205.04061 [cs.CV]

Multilevel Hierarchical Network with Multiscale Sampling for Video Question Answering

Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou

Published 2022-05-09 (Version 1)

Video question answering (VideoQA) is challenging given its multimodal combination of visual understanding and natural language processing. Most existing approaches ignore the visual appearance-motion information at different temporal scales, and it remains unclear how to incorporate the multilevel processing capacity of a deep learning model with such multiscale information. Targeting these issues, this paper proposes a novel Multilevel Hierarchical Network (MHN) with multiscale sampling for VideoQA. MHN comprises two modules, namely Recurrent Multimodal Interaction (RMI) and Parallel Visual Reasoning (PVR). With multiscale sampling, RMI iterates the interaction between the appearance-motion information at each scale and the question embeddings to build multilevel question-guided visual representations. Thereon, with a shared transformer encoder, PVR infers the visual cues at each level in parallel, so as to answer different question types that may rely on visual information at different levels. Through extensive experiments on three VideoQA datasets, we demonstrate improved performance over previous state-of-the-art methods and justify the effectiveness of each part of our method.
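The multiscale sampling described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name `multiscale_sample` and the specific scheme (uniformly sampling progressively more frames per scale, doubling at each level) are assumptions standing in for whatever sampling the authors use, and the frame array is a stand-in for per-frame appearance-motion features.

```python
import numpy as np

def multiscale_sample(frames, num_scales=3, base_len=4):
    """Sample a frame-feature sequence at multiple temporal scales.

    Illustrative sketch (not the paper's exact scheme): at scale s we
    keep base_len * 2**s uniformly spaced frames, producing coarse
    (few frames, long temporal span per frame) through fine
    (many frames) views of the same video for the hierarchy's levels.
    """
    T = len(frames)
    levels = []
    for s in range(num_scales):
        n = base_len * 2 ** s  # number of frames kept at this scale
        # Uniformly spaced indices from the first to the last frame.
        idx = np.linspace(0, T - 1, num=n).round().astype(int)
        levels.append(frames[idx])
    return levels

# Stand-in "video": 64 frame indices in place of real frame features.
video = np.arange(64)
levels = multiscale_sample(video)
print([len(level) for level in levels])  # [4, 8, 16]
```

Each level's sampled frames would then be fused with the question embeddings (the RMI step) before the shared transformer encoder reasons over all levels in parallel (the PVR step).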

Comments: Accepted by IJCAI 2022. arXiv admin note: text overlap with arXiv:2109.04735
Categories: cs.CV, cs.AI
Related articles:
arXiv:1904.11574 [cs.CV] (Published 2019-04-25)
TVQA+: Spatio-Temporal Grounding for Video Question Answering
arXiv:2406.18538 [cs.CV] (Published 2024-05-17)
VideoQA-SC: Adaptive Semantic Communication for Video Question Answering
arXiv:2308.03267 [cs.CV] (Published 2023-08-07)
Redundancy-aware Transformer for Video Question Answering