arXiv Analytics

arXiv:2006.04569 [cs.CV]

Person Re-identification in the 3D Space

Zhedong Zheng, Yi Yang

Published 2020-06-08, Version 1

People live in a 3D world. However, existing works on person re-identification (re-id) mostly learn representations in a 2D space, which intrinsically limits the understanding of people. In this work, we address this limitation by exploiting prior knowledge of the 3D body structure. Specifically, we project 2D images into a 3D space and introduce a novel Omni-scale Graph Network (OG-Net) to learn representations from the resulting sparse 3D points. With the help of 3D geometry information, we can learn a new type of deep re-id feature that is free from nuisance variations such as scale and viewpoint. To our knowledge, this is among the first attempts to conduct person re-identification in the 3D space. Extensive experiments show that the proposed method achieves competitive results on three popular large-scale person re-id datasets and generalizes well to unseen datasets.
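The core idea — lifting a person image to sparse 3D points and aggregating features over a neighbourhood graph — can be illustrated with a minimal sketch. This is not the authors' OG-Net implementation (see the linked repository for that); the function names `knn` and `graph_layer`, the k-nearest-neighbour graph, and the mean aggregation are illustrative assumptions, written in pure Python for clarity:

```python
import math

def knn(points, k):
    """For each 3D point, return the indices of its k nearest neighbours
    (Euclidean distance, ties broken by index)."""
    neighbours = []
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        neighbours.append([j for _, j in dists[:k]])
    return neighbours

def graph_layer(points, feats, k=2):
    """One toy aggregation step: each point's new feature is the mean of its
    own feature and its k nearest neighbours' features. Because the update
    depends only on the 3D graph, it is unaffected by 2D image scale."""
    nbrs = knn(points, k)
    out = []
    for i, f in enumerate(feats):
        group = [f] + [feats[j] for j in nbrs[i]]
        out.append([sum(c) / len(group) for c in zip(*group)])
    return out

# Toy example: four body keypoints lifted to 3D, each with a 2-dim feature.
pts = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
fts = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
print(graph_layer(pts, fts, k=2))
```

In the paper's setting the points would come from a learned 2D-to-3D projection of the person image and the layer would use learnable weights at multiple scales; this sketch only shows the graph-aggregation structure.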

Comments: The code is available at https://github.com/layumi/person-reid-3d
Categories: cs.CV
Related articles:
arXiv:1910.02527 [cs.CV] (Published 2019-10-06)
3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera
arXiv:2206.11895 [cs.CV] (Published 2022-06-23)
Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space
arXiv:1605.06240 [cs.CV] (Published 2016-05-20)
FPNN: Field Probing Neural Networks for 3D Data