arXiv Analytics

arXiv:1906.06822 [cs.CV]

Spatio-Temporal Fusion Networks for Action Recognition

Sangwoo Cho, Hassan Foroosh

Published 2019-06-17 (Version 1)

Video-based CNN approaches have focused on effective ways to fuse appearance and motion networks, but they typically fail to exploit temporal information across video frames. In this work, we present a novel spatio-temporal fusion network (STFN) that integrates the temporal dynamics of appearance and motion information from entire videos. The captured temporal dynamic information is then aggregated into a better video-level representation and learned via end-to-end training. The spatio-temporal fusion network consists of two sets of Residual Inception blocks that extract temporal dynamics, and a fusion connection for appearance and motion features. The benefits of STFN are: (a) it captures local and global temporal dynamics of complementary data to learn video-wide information; and (b) it is applicable to any video classification network to boost performance. We explore a variety of design choices for STFN and verify, through ablation studies, how performance varies with each choice. We perform experiments on two challenging human activity datasets, UCF101 and HMDB51, and achieve state-of-the-art results with the best network.
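To make the fusion idea concrete, here is a minimal NumPy sketch of a two-stream temporal fusion in the spirit described above: per-frame appearance and motion features pass through a temporal filter (standing in for the Residual Inception blocks that capture local dynamics), are combined by a fusion connection, and are aggregated over all frames into a video-level vector. The function names (`temporal_conv1d`, `stfn_sketch`) and the element-wise-sum fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def temporal_conv1d(feats, kernel):
    # feats: (T, D) per-frame features; kernel: (K,) temporal filter.
    # 'same'-padded 1D convolution along the time axis, shared across channels.
    T, D = feats.shape
    K = len(kernel)
    pad = K // 2
    padded = np.pad(feats, ((pad, pad), (0, 0)))
    out = np.zeros_like(feats)
    for t in range(T):
        out[t] = kernel @ padded[t:t + K]  # weighted sum over a temporal window
    return out

def stfn_sketch(appearance, motion, kernel):
    # Local temporal dynamics: filter each stream along time
    # (a stand-in for the paper's Residual Inception blocks).
    app_local = temporal_conv1d(appearance, kernel)
    mot_local = temporal_conv1d(motion, kernel)
    # Fusion connection: combine appearance and motion streams
    # (element-wise sum chosen here for simplicity).
    fused = app_local + mot_local
    # Global temporal aggregation: average over all frames
    # to obtain a single video-level representation.
    return fused.mean(axis=0)
```

With an identity kernel `[0, 1, 0]`, each stream passes through unchanged, so the sketch reduces to summing the two streams and averaging over time; any learned kernel would instead mix information across neighboring frames before fusion.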

Related articles:
arXiv:1809.03669 [cs.CV] (Published 2018-09-11)
Temporal-Spatial Mapping for Action Recognition
arXiv:1711.11248 [cs.CV] (Published 2017-11-30)
A Closer Look at Spatiotemporal Convolutions for Action Recognition
arXiv:1906.06813 [cs.CV] (Published 2019-06-17)
A Temporal Sequence Learning for Action Recognition and Prediction