arXiv:1810.04891 [cs.CV]

Dense Object Reconstruction from RGBD Images with Embedded Deep Shape Representations

Lan Hu, Yuchen Cao, Peng Wu, Laurent Kneip

Published 2018-10-11 (version 1)

Most problems involving simultaneous localization and mapping can nowadays be solved with one of two fundamentally different approaches. The traditional approach uses a least-squares objective that minimizes many local photometric or geometric residuals over explicitly parametrized structure and camera parameters. Unmodeled effects that violate the Lambertian surface assumption or the geometric invariances of individual residuals are countered by statistical averaging or by adding robust kernels and smoothness terms. Aiming at more accurate measurement models and the inclusion of higher-order shape priors, the community has more recently shifted its attention to deep end-to-end models for solving geometric localization and mapping problems. At test time, however, these feed-forward models ignore the more traditional geometric and photometric consistency terms, which limits their ability to recover fine details and can lead to complete failure in corner cases. With an application to dense object modeling from RGBD images, our work aims to take the best of both worlds by embedding modern higher-order object shape priors into classical iterative residual-minimization objectives. We demonstrate a general ability to improve mapping accuracy with respect to each modality alone, and present a successful application to real data.
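To make the combination concrete, the sketch below illustrates the general idea of embedding a learned shape prior inside an iterative residual minimization; it is not the authors' pipeline. A latent shape code and a pose correction are optimized jointly so that points back-projected from an RGBD frame fall on the zero level set of a decoded implicit surface. The decoder architecture, latent dimension, translation-only pose update, and regularization weight are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): jointly refine a latent
# shape code and a pose correction by minimizing geometric residuals against
# observed RGBD points, with a decoded implicit shape acting as the prior.
# The decoder is a stand-in; in practice it would be pretrained offline.
import torch

class ShapeDecoder(torch.nn.Module):
    """Maps a latent code and 3D query points to signed-distance values."""
    def __init__(self, latent_dim=16, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim + 3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, code, points):
        # code: (latent_dim,), points: (N, 3) -> signed distances: (N,)
        tiled = code.expand(points.shape[0], -1)
        return self.net(torch.cat([tiled, points], dim=-1)).squeeze(-1)

decoder = ShapeDecoder()            # assume weights were trained offline
points_obs = torch.randn(1024, 3)   # back-projected RGBD points (initial object frame)

# Free variables: latent shape code and a pose correction (translation only
# here for brevity; a full SE(3) update would use a Lie-algebra parametrization).
code = torch.zeros(16, requires_grad=True)
t = torch.zeros(3, requires_grad=True)

opt = torch.optim.Adam([code, t], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    residuals = decoder(code, points_obs + t)   # SDF values at observed points
    data_term = residuals.abs().mean()          # points should lie on the zero level set
    prior_term = 1e-3 * code.pow(2).sum()       # keep the code near the training manifold
    loss = data_term + prior_term
    loss.backward()
    opt.step()

The design point mirrors the abstract's argument: the data term keeps classical geometric consistency in the loop at test time, while the prior term constrains the solution to shapes the learned representation can express.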

Related articles:
arXiv:1812.01519 [cs.CV] (Published 2018-12-04)
SurfConv: Bridging 3D and 2D Convolution for RGBD Images
arXiv:1711.01371 [cs.CV] (Published 2017-11-04)
An Iterative Co-Saliency Framework for RGBD Images
arXiv:1901.10772 [cs.CV] (Published 2019-01-30)
Human-centric light sensing and estimation from RGBD images: The invisible light switch