arXiv:2006.04569 [cs.CV]

Person Re-identification in the 3D Space

Zhedong Zheng, Yi Yang

Published 2020-06-08 (Version 1)

People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploiting prior knowledge of the 3D body structure. Specifically, we project 2D images into a 3D space and introduce a novel Omni-scale Graph Network (OG-Net) to learn representations from sparse 3D points. With the help of 3D geometry information, we can learn a new type of deep re-id feature that is robust to nuisance variations such as scale and viewpoint. To our knowledge, this work is among the first attempts to conduct person re-identification in the 3D space. Extensive experiments show that the proposed method achieves competitive results on three popular large-scale person re-id datasets, and generalizes well to unseen datasets.
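The abstract's pipeline — lift a 2D image to sparse 3D points, then aggregate per-point features over a local neighborhood graph — can be illustrated with a minimal sketch. This is not the actual OG-Net; it is a generic edge-style graph convolution over a k-nearest-neighbor graph of 3D points, with random stand-in data in place of the projected body points and learned weights:

```python
import numpy as np

def knn_indices(xyz, k):
    """Indices of the k nearest neighbors of each 3D point (self excluded)."""
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)  # (N, N) squared dists
    return np.argsort(d2, axis=1)[:, 1:k + 1]                # skip column 0 (self)

def graph_conv(feats, xyz, k, W):
    """One graph-conv layer: edge feature [center, neighbor - center],
    linear map + ReLU, then max-pool over the k neighbors."""
    idx = knn_indices(xyz, k)
    center = np.repeat(feats[:, None, :], k, axis=1)         # (N, k, C)
    neigh = feats[idx]                                       # (N, k, C)
    edge = np.concatenate([center, neigh - center], axis=-1) # (N, k, 2C)
    return np.maximum(edge @ W, 0.0).max(axis=1)             # (N, C_out)

rng = np.random.default_rng(0)
xyz = rng.normal(size=(128, 3))     # stand-in for sparse 3D body points
feats = rng.normal(size=(128, 6))   # hypothetical per-point features, e.g. xyz + RGB
W = rng.normal(size=(12, 32)) * 0.1 # stand-in for learned layer weights

out = graph_conv(feats, xyz, k=8, W=W)  # (128, 32) per-point features
emb = out.max(axis=0)                   # (32,) global descriptor for re-id matching
```

Because the graph is built from 3D coordinates rather than pixel grids, the aggregation is indifferent to image scale and rendering viewpoint, which is the intuition behind the abstract's claim of robustness to those variations.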

Comments: The code is available at https://github.com/layumi/person-reid-3d
Categories: cs.CV
Related articles:
- arXiv:2404.04319 [cs.CV] (Published 2024-04-05): SpatialTracker: Tracking Any 2D Pixels in 3D Space
- arXiv:2408.07416 [cs.CV] (Published 2024-08-14): Rethinking Open-Vocabulary Segmentation of Radiance Fields in 3D Space
- arXiv:1910.02527 [cs.CV] (Published 2019-10-06): 3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera