arXiv Analytics

arXiv:1912.01001 [cs.CV]

View-Invariant Probabilistic Embedding for Human Pose

Jennifer J. Sun, Jiaping Zhao, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Ting Liu

Published 2019-12-02 (Version 1)

Depictions of similar human body configurations can vary with changing viewpoints. Using only 2D information, we would like to enable vision algorithms to recognize similarity in human body poses across multiple views. This ability is useful for analyzing body movements and human behaviors in images and videos. In this paper, we propose an approach for learning a compact view-invariant embedding space from 2D joint keypoints alone, without explicitly predicting 3D poses. Since 2D poses are projected from 3D space, they have an inherent ambiguity, which is difficult to represent through a deterministic mapping. Hence, we use probabilistic embeddings to model this input uncertainty. Experimental results show that our embedding model achieves higher accuracy when retrieving similar poses across different camera views, in comparison with 2D-to-3D pose lifting models. The results also suggest that our model is able to generalize across datasets, and our embedding variance correlates with input pose ambiguity.
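To make the idea concrete, the following is a minimal sketch of such a probabilistic pose embedding in Python (PyTorch), assuming a Gaussian parameterization in which a small network predicts an embedding mean and a per-dimension log-variance from flattened 2D keypoints, and retrieval ranks gallery poses by distance between embedding means. All module names, layer sizes, and the retrieval step are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of a probabilistic 2D-pose embedding (assumed Gaussian
    # parameterization); names and sizes are hypothetical, for illustration only.
    import torch
    import torch.nn as nn

    class ProbabilisticPoseEmbedder(nn.Module):
        def __init__(self, num_joints=17, embed_dim=16):
            super().__init__()
            # Flattened 2D keypoints (x, y per joint) pass through a small MLP.
            self.backbone = nn.Sequential(
                nn.Linear(num_joints * 2, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
            )
            self.mean_head = nn.Linear(256, embed_dim)    # embedding mean
            self.logvar_head = nn.Linear(256, embed_dim)  # per-dim log-variance

        def forward(self, keypoints_2d):
            # keypoints_2d: (batch, num_joints, 2)
            h = self.backbone(keypoints_2d.flatten(start_dim=1))
            mean = self.mean_head(h)
            logvar = self.logvar_head(h)  # larger variance ~ more ambiguous 2D pose
            return mean, logvar

    # Retrieval sketch: rank gallery poses by Euclidean distance between means;
    # the predicted variance can serve as an input-ambiguity signal.
    model = ProbabilisticPoseEmbedder()
    query = torch.randn(1, 17, 2)       # one query pose
    gallery = torch.randn(100, 17, 2)   # gallery of candidate poses
    q_mean, _ = model(query)
    g_mean, _ = model(gallery)
    ranking = torch.cdist(q_mean, g_mean).argsort(dim=1)

In the paper's setting, such a model would be trained so that 2D detections of the same 3D pose seen from different cameras embed close together, with the variance absorbing the depth ambiguity of a single 2D projection.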

Related articles:
arXiv:1812.06408 [cs.CV] (Published 2018-12-16)
Human Pose and Path Estimation from Aerial Video using Dynamic Classifier Selection
arXiv:1907.03537 [cs.CV] (Published 2019-07-08)
Linking Art through Human Poses
arXiv:1902.11026 [cs.CV] (Published 2019-02-28)
Towards Multi-pose Guided Virtual Try-on Network