arXiv:2007.15649 [cs.CV]

Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild

Jason Y. Zhang, Sam Pepose, Hanbyul Joo, Deva Ramanan, Jitendra Malik, Angjoo Kanazawa

Published 2020-07-30 (Version 1)

We present a method that infers spatial arrangements and shapes of humans and objects in a globally consistent 3D scene, all from a single image in-the-wild captured in an uncontrolled environment. Notably, our method runs on datasets without any scene- or object-level 3D supervision. Our key insight is that considering humans and objects jointly gives rise to "3D common sense" constraints that can be used to resolve ambiguity. In particular, we introduce a scale loss that learns the distribution of object size from data; an occlusion-aware silhouette re-projection loss to optimize object pose; and a human-object interaction loss to capture the spatial layout of objects with which humans interact. We empirically validate that our constraints dramatically reduce the space of likely 3D spatial configurations. We demonstrate our approach on challenging, in-the-wild images of humans interacting with large objects (such as bicycles, motorcycles, and surfboards) and handheld objects (such as laptops, tennis rackets, and skateboards). We quantify the ability of our approach to recover human-object arrangements and outline remaining challenges in this relatively unexplored domain. The project webpage can be found at https://jasonyzhang.com/phosa.
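
The abstract names three loss terms. The short PyTorch sketch below illustrates one plausible form for each and a toy optimization loop over an object's scale and translation. All function names, the log-normal size prior, the loss weights, and the stand-in silhouette tensors are illustrative assumptions for exposition, not the authors' released PHOSA implementation (which additionally uses a differentiable renderer to produce the object silhouette).

```python
# Hypothetical sketch of the three "3D common sense" losses described in the
# abstract. Names, prior parameters, and the toy data are assumptions.
import torch

def scale_loss(log_scale, prior_mean, prior_std):
    """Penalize object scales that are unlikely under an assumed
    per-category log-normal size prior (one reading of the scale loss)."""
    return ((log_scale - prior_mean) ** 2) / (2 * prior_std ** 2)

def occlusion_aware_silhouette_loss(rendered_sil, target_sil, visible_mask):
    """Squared error between a rendered object silhouette and the detected
    instance mask, evaluated only over the unoccluded (visible) pixels."""
    diff = (rendered_sil - target_sil) ** 2
    return (diff * visible_mask).sum() / visible_mask.sum().clamp(min=1)

def interaction_loss(human_points, object_points):
    """Pull interacting surfaces together (e.g., hands toward a racket
    handle); here a mean closest-point distance stands in for the
    paper's interaction term."""
    d = torch.cdist(human_points, object_points)   # (H, O) pairwise distances
    return d.min(dim=1).values.mean()

# Toy usage: jointly optimize an object's log-scale and 3D offset.
log_scale = torch.zeros(1, requires_grad=True)
offset = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([log_scale, offset], lr=0.01)

human_pts = torch.rand(100, 3)           # fixed human surface samples
obj_pts_canonical = torch.rand(80, 3)    # canonical object surface samples
rendered = torch.rand(64, 64)            # stand-in for a differentiable render
target = (torch.rand(64, 64) > 0.5).float()
visible = torch.ones(64, 64)             # 1 where the object is unoccluded

for _ in range(100):
    opt.zero_grad()
    obj_pts = obj_pts_canonical * log_scale.exp() + offset
    loss = (scale_loss(log_scale, prior_mean=0.0, prior_std=0.5).sum()
            + occlusion_aware_silhouette_loss(rendered, target, visible)
            + interaction_loss(human_pts, obj_pts))
    loss.backward()
    opt.step()
```

The point of combining the terms is the one the abstract makes: no single cue fixes the depth/scale ambiguity of a monocular image, but a size prior, a visibility-masked silhouette fit, and a contact constraint jointly shrink the set of plausible 3D arrangements.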

Related articles:
arXiv:1908.07117 [cs.CV] (Published 2019-08-20)
360-Degree Textures of People in Clothing from a Single Image
arXiv:1406.2282 [cs.CV] (Published 2014-06-09)
Robust Estimation of 3D Human Poses from a Single Image
arXiv:1903.06473 [cs.CV] (Published 2019-03-15)
DeepHuman: 3D Human Reconstruction from a Single Image