arXiv:1306.3874 [cs.CV]

Classifying and Visualizing Motion Capture Sequences using Deep Neural Networks

Kyunghyun Cho, Xi Chen

Published 2013-06-17, updated 2014-09-01 (version 2)

Gesture recognition using motion capture data and depth sensors has recently drawn increasing attention in computer vision. Currently, most systems classify datasets with only a couple of dozen different actions. Moreover, feature extraction from the data is often computationally complex. In this paper, we propose a novel system to recognize actions from skeleton data using simple but effective features and deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features, a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs the input data. We use a deep autoencoder to visualize the learnt features, and the experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95%, which is, to our knowledge, the state-of-the-art result for such a large dataset.
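As a rough illustration of the per-frame features named above, here is a minimal NumPy sketch. The input shape, the choice of root joint, and the exact trajectory normalization are assumptions made for illustration; the paper's precise definitions may differ.

```python
import numpy as np

def extract_features(joints, root_idx=0, eps=1e-8):
    """Per-frame skeleton features loosely following the abstract:
    relative joint positions (PO), temporal differences (TD), and
    normalized motion trajectories (NT).

    joints: array of shape (T, J, 3) -- T frames, J joints, 3D coordinates.
    root_idx, eps, and the normalization below are illustrative assumptions,
    not the authors' published definitions.
    """
    # PO: joint positions relative to a root joint (e.g., the hip),
    # removing global translation.
    po = joints - joints[:, root_idx:root_idx + 1, :]

    # TD: frame-to-frame differences; prepend the first frame so the
    # output keeps T rows (first row of differences is all zeros).
    td = np.diff(joints, axis=0, prepend=joints[:1])

    # NT: center each joint's trajectory and scale by its maximum
    # excursion, so fast and slow performances become comparable.
    centered = joints - joints.mean(axis=0, keepdims=True)
    span = np.linalg.norm(centered, axis=-1).max(axis=0)      # shape (J,)
    nt = centered / (span[None, :, None] + eps)

    # Concatenate and flatten to one feature vector per frame: (T, 9 * J).
    return np.concatenate([po, td, nt], axis=-1).reshape(len(joints), -1)
```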
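The hybrid multi-layer perceptron can likewise be sketched as a shared encoder with two heads, one for classification and one for reconstruction, trained with a joint loss. This PyTorch sketch is an assumption about the architecture's shape only: layer sizes, depth, and the loss weight alpha are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridMLP(nn.Module):
    """Shared encoder feeding a softmax classifier and a decoder that
    reconstructs the input, so the network classifies and reconstructs
    simultaneously. Sizes are illustrative assumptions."""
    def __init__(self, in_dim, n_classes=65, hidden=512, code=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, code), nn.ReLU(),
        )
        self.classifier = nn.Linear(code, n_classes)   # classification head
        self.decoder = nn.Sequential(                  # reconstruction head
            nn.Linear(code, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

def hybrid_loss(logits, recon, x, y, alpha=0.5):
    # Joint objective: cross-entropy on labels plus weighted
    # mean-squared reconstruction error on the input features.
    return F.cross_entropy(logits, y) + alpha * F.mse_loss(recon, x)
```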

Related articles: Most relevant | Search more
arXiv:1605.08153 [cs.CV] (Published 2016-05-26)
DeepMovie: Using Optical Flow and Deep Neural Networks to Stylize Movies
arXiv:1709.03820 [cs.CV] (Published 2017-09-12)
Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers
arXiv:1703.07715 [cs.CV] (Published 2017-03-22)
Classifying Symmetrical Differences and Temporal Change in Mammography Using Deep Neural Networks