arXiv Analytics

arXiv:1706.08495 [stat.ML]

Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables

Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft

Published 2017-06-26 (Version 1)

Bayesian neural networks (BNNs) with latent variables are probabilistic models that can automatically identify complex stochastic patterns in the data. We describe and study in these models a decomposition of predictive uncertainty into its epistemic and aleatoric components. First, we show how such a decomposition arises naturally in a Bayesian active learning scenario by following an information-theoretic approach. Second, we use a similar decomposition to develop a novel risk-sensitive objective for safe reinforcement learning (RL). This objective minimizes the effect of model bias in environments whose stochastic dynamics are described by BNNs with latent variables. Our experiments illustrate the usefulness of the resulting decomposition in active learning and safe RL settings.
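The information-theoretic decomposition the abstract refers to can be sketched for a classification setting: total predictive entropy splits into expected entropy under the posterior (aleatoric) plus the mutual information between the label and the weights (epistemic). The sketch below is illustrative only, assuming Monte Carlo samples of class probabilities from the posterior; the function names and the toy inputs are not from the paper.

```python
import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy in nats; small epsilon guards against log(0)
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def decompose_uncertainty(probs):
    """probs: array of shape (S, C) holding predictive class
    probabilities, one row per posterior weight sample."""
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)             # H[ E_w p(y|x,w) ]  total uncertainty
    aleatoric = entropy(probs).mean()   # E_w H[ p(y|x,w) ]  expected entropy
    epistemic = total - aleatoric       # mutual information I(y; w | x)
    return total, aleatoric, epistemic

# Toy example: two posterior samples that disagree strongly,
# so epistemic uncertainty should be large.
probs = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
total, ale, epi = decompose_uncertainty(probs)
```

Here the averaged predictive distribution is uniform, so the total entropy is log 2, while each individual sample is fairly confident; the gap between the two is the epistemic term that active learning would seek to reduce.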

Related articles:
arXiv:2008.08044 [stat.ML] (Published 2020-08-18)
Bayesian neural networks and dimensionality reduction
arXiv:1502.05336 [stat.ML] (Published 2015-02-18)
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
arXiv:1902.02603 [stat.ML] (Published 2019-02-07)
Radial and Directional Posteriors for Bayesian Neural Networks