{ "id": "1907.10659", "version": "v1", "published": "2019-07-24T19:08:31.000Z", "updated": "2019-07-24T19:08:31.000Z", "title": "SDNet: Semantically Guided Depth Estimation Network", "authors": [ "Matthias Ochs", "Adrian Kretz", "Rudolf Mester" ], "comment": "Paper is accepted at German Conference on Pattern Recognition (GCPR), Dortmund, Germany, September 2019", "categories": [ "cs.CV" ], "abstract": "Autonomous vehicles and robots require a full scene understanding of the environment to interact with it. Such a perception typically incorporates pixel-wise knowledge of the depths and semantic labels for each image from a video sensor. Recent learning-based methods estimate both types of information independently using two separate CNNs. In this paper, we propose a model that is able to predict both outputs simultaneously, which leads to improved results and even reduced computational costs compared to independent estimation of depth and semantics. We also empirically prove that the CNN is capable of learning more meaningful and semantically richer features. Furthermore, our SDNet estimates the depth based on ordinal classification. On the basis of these two enhancements, our proposed method achieves state-of-the-art results in semantic segmentation and depth estimation from single monocular input images on two challenging datasets.", "revisions": [ { "version": "v1", "updated": "2019-07-24T19:08:31.000Z" } ], "analyses": { "keywords": [ "semantically guided depth estimation network", "method achieves state-of-the-art results", "single monocular input images", "perception typically incorporates pixel-wise knowledge" ], "tags": [ "conference paper" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }