{ "id": "2006.04569", "version": "v1", "published": "2020-06-08T13:20:33.000Z", "updated": "2020-06-08T13:20:33.000Z", "title": "Person Re-identification in the 3D Space", "authors": [ "Zhedong Zheng", "Yi Yang" ], "comment": "The code is available at https://github.com/layumi/person-reid-3d", "categories": [ "cs.CV" ], "abstract": "People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider representation learning in a 2D space, which intrinsically limits the understanding of people. In this work, we address this limitation by exploiting prior knowledge of the 3D body structure. Specifically, we project 2D images into a 3D space and introduce a novel Omni-scale Graph Network (OG-Net) to learn representations from sparse 3D points. With the help of 3D geometry information, we can learn a new type of deep re-id feature that is robust to nuisance factors such as scale and viewpoint. To our knowledge, this is among the first attempts to conduct person re-identification in the 3D space. Extensive experiments show that the proposed method achieves competitive results on three popular large-scale person re-id datasets and generalizes well to unseen datasets.", "revisions": [ { "version": "v1", "updated": "2020-06-08T13:20:33.000Z" } ], "analyses": { "keywords": [ "3d space", "popular large-scale person re-id datasets", "novel omni-scale graph network", "deep re-id feature", "3d body structure" ], "tags": [ "github project" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }