
arXiv:2003.09763 [cs.CV]

Monocular Depth Prediction Through Continuous 3D Loss

Minghan Zhu, Maani Ghaffari, Yuanxin Zhong, Pingping Lu, Zhong Cao, Ryan M. Eustice, Huei Peng

Published 2020-03-21Version 1

This paper reports a new continuous 3D loss function for learning depth from monocular images. The dense depth prediction from a monocular image is supervised using sparse LIDAR points, exploiting data readily available from camera-LIDAR sensor suites during training. Currently, no range sensor is both accurate and affordable: stereo cameras estimate depth inaccurately, while LIDARs measure it sparsely and at high cost. In contrast to the current point-to-point loss evaluation approach, the proposed 3D loss treats point clouds as continuous objects and therefore overcomes the lack of dense ground-truth depth caused by the sparsity of LIDAR measurements. Experimental evaluations show that the proposed method achieves accurate depth measurement with consistent 3D geometric structures through a monocular camera.
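To illustrate the idea of treating point clouds as continuous objects rather than comparing points one-to-one, here is a minimal sketch of a kernelized (RKHS-style) discrepancy between a predicted point cloud and a sparse LIDAR cloud. Each cloud is lifted to a continuous function via a Gaussian kernel, so even a sparse cloud supervises every predicted point in its neighborhood. The function name, bandwidth parameter, and formulation below are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def continuous_3d_loss(pred_pts, lidar_pts, sigma=0.5):
    """Illustrative kernelized discrepancy between two point clouds.

    pred_pts:  (N, 3) points back-projected from the predicted depth map.
    lidar_pts: (M, 3) sparse LIDAR points.
    sigma:     Gaussian kernel bandwidth (assumed hyperparameter).

    Each cloud is represented as a sum of Gaussian bumps; the loss is
    the squared distance between these continuous functions in the
    kernel-induced space, which is well defined even when M << N.
    """
    def kernel_sum(a, b):
        # sum_{i,j} exp(-||a_i - b_j||^2 / (2 * sigma^2))
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

    # ||f_pred - f_lidar||^2 expands into three kernel cross-terms.
    return (kernel_sum(pred_pts, pred_pts)
            - 2.0 * kernel_sum(pred_pts, lidar_pts)
            + kernel_sum(lidar_pts, lidar_pts))
```

Because the loss is a smooth function of every predicted point, its gradient pulls the dense prediction toward the sparse measurements without requiring a one-to-one point correspondence.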

Related articles:
arXiv:2102.13258 [cs.CV] (Published 2021-02-26)
Boundary-induced and scene-aggregated network for monocular depth prediction
arXiv:2103.12091 [cs.CV] (Published 2021-03-22)
Transformers Solve the Limited Receptive Field for Monocular Depth Prediction
arXiv:2011.04123 [cs.CV] (Published 2020-11-09)
Deep Learning based Monocular Depth Prediction: Datasets, Methods and Applications