{ "id": "1809.03669", "version": "v1", "published": "2018-09-11T03:29:28.000Z", "updated": "2018-09-11T03:29:28.000Z", "title": "Temporal-Spatial Mapping for Action Recognition", "authors": [ "Xiaolin Song", "Cuiling Lan", "Wenjun Zeng", "Junliang Xing", "Jingyu Yang", "Xiaoyan Sun" ], "categories": [ "cs.CV" ], "abstract": "Deep learning models have enjoyed great success for image related computer vision tasks like image classification and object detection. For video related tasks like human action recognition, however, the advancements are not as significant yet. The main challenge is the lack of effective and efficient models in modeling the rich temporal spatial information in a video. We introduce a simple yet effective operation, termed Temporal-Spatial Mapping (TSM), for capturing the temporal evolution of the frames by jointly analyzing all the frames of a video. We propose a video level 2D feature representation by transforming the convolutional features of all frames to a 2D feature map, referred to as VideoMap. With each row being the vectorized feature representation of a frame, the temporal-spatial features are compactly represented, while the temporal dynamic evolution is also well embedded. Based on the VideoMap representation, we further propose a temporal attention model within a shallow convolutional neural network to efficiently exploit the temporal-spatial dynamics. The experiment results show that the proposed scheme achieves the state-of-the-art performance, with 4.2% accuracy gain over Temporal Segment Network (TSN), a competing baseline method, on the challenging human action benchmark dataset HMDB51.", "revisions": [ { "version": "v1", "updated": "2018-09-11T03:29:28.000Z" } ], "analyses": { "keywords": [ "action recognition", "temporal-spatial mapping", "related computer vision tasks", "human action benchmark dataset hmdb51", "video level 2d feature representation" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }