arXiv:1801.10304 [cs.CV]
Action Recognition with Visual Attention on Skeleton Images
Zhengyuan Yang, Yuncheng Li, Jianchao Yang, Jiebo Luo
Published 2018-01-31 (Version 1)
Action recognition with 3D skeleton sequences is becoming popular due to its speed and robustness. Recently proposed Convolutional Neural Network (CNN) based methods have shown good performance in learning spatio-temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN-based methods, two problems potentially limit performance. First, previous skeleton representations are generated by chaining joints in a fixed order; the corresponding semantic meaning is unclear and the structural information among the joints is lost. Second, previous models lack the ability to focus on informative joints. An attention mechanism is important for skeleton-based action recognition because there exist spatio-temporal key stages and the joint predictions can be inaccurate. To address these two problems, we propose a novel CNN-based method for skeleton-based action recognition. We first redesign the skeleton representations with a depth-first tree traversal order, which enhances the semantic meaning of the skeleton images and better preserves the structural information among joints. We then propose a two-branch attention architecture that focuses on spatio-temporal key stages and filters out unreliable joint predictions. A base attention model with the simplest structure is first introduced to illustrate the two-branch attention architecture. By improving the structures in both branches, we further propose a Global Long-sequence Attention Network (GLAN). Experimental results on the NTU RGB+D dataset and the SBU Kinect Interaction dataset show that our proposed approach outperforms the state-of-the-art methods and demonstrate the effectiveness of each component.
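To make the depth-first traversal idea concrete, the sketch below builds a "skeleton image" whose column order follows a DFS walk of the joint tree, so joints that are adjacent on the body stay adjacent in the image. This is a minimal illustration, not the authors' implementation: the 15-joint tree, the joint indices, and the normalization are assumptions for demonstration (the paper works with the full NTU RGB+D joint set).

```python
# Minimal sketch (not the authors' code) of chaining joints into a skeleton
# image using a depth-first tree traversal order instead of a fixed flat order.
import numpy as np

# Hypothetical 15-joint skeleton tree: parent -> children (indices are assumptions).
SKELETON_TREE = {
    0: [1, 6, 9],        # torso -> neck, left hip, right hip
    1: [2, 3, 12],       # neck -> head, left shoulder, right shoulder
    3: [4], 4: [5],      # left arm chain
    12: [13], 13: [14],  # right arm chain
    6: [7], 7: [8],      # left leg chain
    9: [10], 10: [11],   # right leg chain
}

def dfs_order(root=0):
    """Depth-first walk that revisits a parent after each child subtree,
    so every tree edge appears in the chained joint order."""
    order = []
    def visit(j):
        order.append(j)
        for c in SKELETON_TREE.get(j, []):
            visit(c)
            order.append(j)  # revisit the parent when backtracking
    visit(root)
    return order

def skeleton_image(seq):
    """seq: (T, J, 3) array of 3D joint coordinates over T frames.
    Returns a (T, len(order), 3) uint8 image: rows are frames, columns are
    joints in DFS order, channels are (x, y, z) scaled to [0, 255]."""
    order = dfs_order()
    img = seq[:, order, :].astype(np.float32)
    img -= img.min(axis=(0, 1), keepdims=True)
    img /= img.max(axis=(0, 1), keepdims=True) + 1e-6
    return (img * 255).astype(np.uint8)

# Example: a random 40-frame, 15-joint sequence.
demo = skeleton_image(np.random.randn(40, 15, 3))
print(demo.shape)  # (40, 29, 3): 15 joints + 14 backtracking revisits per frame
```

The resulting image can then be fed to a standard image CNN; the DFS ordering is what keeps structurally connected joints in neighboring columns, which is the property the abstract refers to.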