arXiv:1907.10659 [cs.CV]

SDNet: Semantically Guided Depth Estimation Network

Matthias Ochs, Adrian Kretz, Rudolf Mester

Published 2019-07-24 (Version 1)

Autonomous vehicles and robots require a full understanding of the surrounding scene in order to interact with it. Such perception typically incorporates pixel-wise knowledge of depth and semantic labels for each image from a video sensor. Recent learning-based methods estimate these two kinds of information independently, using two separate CNNs. In this paper, we propose a model that predicts both outputs simultaneously, which leads to improved results and even lower computational costs compared to estimating depth and semantics independently. We also show empirically that the CNN learns more meaningful and semantically richer features this way. Furthermore, our SDNet estimates depth via ordinal classification. Building on these two enhancements, the proposed method achieves state-of-the-art results in semantic segmentation and depth estimation from a single monocular input image on two challenging datasets.
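The abstract states that SDNet casts depth estimation as ordinal classification. The general idea behind ordinal depth classification can be sketched as follows; note that the depth range, the bin count `K`, the log-spaced discretization, and the geometric-midpoint decoding below are illustrative assumptions for this sketch, not the paper's exact configuration.

```python
import math

# Illustrative sketch of depth-as-ordinal-classification (assumed values,
# not SDNet's actual configuration): the depth range [D_MIN, D_MAX] metres
# is discretized into K ordered bins. Log spacing is a common choice because
# relative depth error matters more than absolute error at large distances.
D_MIN, D_MAX, K = 1.0, 80.0, 8

def bin_edges():
    """K + 1 log-spaced bin edges between D_MIN and D_MAX."""
    return [D_MIN * (D_MAX / D_MIN) ** (k / K) for k in range(K + 1)]

def depth_to_label(d):
    """Index (0..K-1) of the bin containing depth d."""
    return sum(1 for e in bin_edges()[1:-1] if d > e)

def label_to_ordinal(label):
    """Ordinal target: K-1 binary decisions "is the true bin > k?".
    A network head would predict one such binary per threshold per pixel."""
    return [1 if label > k else 0 for k in range(K - 1)]

def ordinal_to_depth(ordinal):
    """Decode: count positive decisions to recover the bin index, then
    return the geometric midpoint of that bin as the depth estimate."""
    label = sum(ordinal)
    edges = bin_edges()
    return math.sqrt(edges[label] * edges[label + 1])
```

The ordinal encoding is what distinguishes this from plain multi-class classification: a prediction that is off by one bin violates only one binary decision, so the loss naturally penalizes large depth errors more than small ones.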

Comments: Accepted at the German Conference on Pattern Recognition (GCPR), Dortmund, Germany, September 2019
Categories: cs.CV