arXiv Analytics


arXiv:1707.07287 [stat.ML]

Learning uncertainty in regression tasks by deep neural networks

Pavel Gurevich, Hannes Stuke

Published 2017-07-23, Version 1

We suggest a general approach to quantifying different types of uncertainty in regression tasks performed by deep neural networks. It is based on the simultaneous training of two neural networks with a joint loss function. One network performs the regression, while the other quantifies the uncertainty of the first network's predictions. Unlike in many standard uncertainty quantification methods, the targets are not assumed to be sampled from an a priori given probability distribution. We analyze how the hyperparameters affect the learning process and, additionally, show that our method can even yield better predictions than standard neural networks trained without an uncertainty counterpart. Finally, we show that a particular case of our approach is mean-variance estimation by a Gaussian network.
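As an illustration of the mean-variance special case mentioned in the abstract, the sketch below trains two small networks under a joint Gaussian negative log-likelihood: one predicts the regression mean, the other a log-variance that serves as the per-input uncertainty estimate. This is not the authors' reference implementation; the PyTorch framework, MLP architecture, toy data, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): two networks trained
# simultaneously with a joint Gaussian negative log-likelihood loss.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


def gaussian_nll(mean, log_var, target):
    # Joint loss coupling both networks:
    # 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2), averaged over the batch.
    return 0.5 * (log_var + (target - mean) ** 2 * torch.exp(-log_var)).mean()


mean_net = MLP(in_dim=1, out_dim=1)   # performs the regression
var_net = MLP(in_dim=1, out_dim=1)    # quantifies uncertainty (log-variance)
optimizer = torch.optim.Adam(
    list(mean_net.parameters()) + list(var_net.parameters()), lr=1e-3
)

# Toy data: y = sin(x) with input-dependent (heteroscedastic) noise.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.abs(x) * torch.randn_like(x)

for step in range(2000):
    optimizer.zero_grad()
    loss = gaussian_nll(mean_net(x), var_net(x), y)
    loss.backward()
    optimizer.step()

# The predicted standard deviation is the per-input uncertainty estimate.
with torch.no_grad():
    sigma = torch.exp(0.5 * var_net(x))
```

Predicting the log-variance rather than the variance itself keeps the loss numerically stable, since the exponential guarantees a positive variance without extra constraints.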

Related articles:
arXiv:1805.10965 [stat.ML] (Published 2018-05-28)
Lipschitz regularity of deep neural networks: analysis and efficient estimation
arXiv:1402.1869 [stat.ML] (Published 2014-02-08, updated 2014-06-07)
On the Number of Linear Regions of Deep Neural Networks
arXiv:1712.09482 [stat.ML] (Published 2017-12-27)
Robust Loss Functions under Label Noise for Deep Neural Networks