arXiv Analytics


arXiv:1611.01639 [cs.LG]

Representation of uncertainty in deep neural networks through sampling

Patrick McClure, Nikolaus Kriegeskorte

Published 2016-11-05 (Version 1)

As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Scalable Bayesian DNNs that use dropout-based variational distributions have recently been proposed. Here we evaluate the ability of Bayesian DNNs trained with Bernoulli or Gaussian distributions over units (dropout) or weights (dropconnect) to represent their own uncertainty at the time of inference through sampling. We tested how well Bayesian fully connected and convolutional DNNs represented their own uncertainty in classifying the MNIST handwritten digits. By adding different levels of Gaussian noise to the test images, we assessed how DNNs represented their uncertainty about regions of input space not covered by the training set. Bayesian DNNs estimated their own uncertainty more accurately than traditional DNNs with a softmax output. These results are important for building better deep learning systems and for investigating the hypothesis that biological neural networks use sampling to represent uncertainty.
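The sampling procedure the abstract describes can be illustrated with a minimal sketch: run several stochastic forward passes, each with a fresh Bernoulli dropout mask, average the softmax outputs, and use the entropy of the averaged distribution as an uncertainty estimate. This is not the authors' exact architecture; the toy two-layer network below uses random, untrained weights purely to show the mechanics of Monte Carlo dropout at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: one hidden layer with dropout, softmax output.
# Weights are random for illustration; a real model would be trained.
W1 = rng.normal(0, 0.1, size=(784, 64))
W2 = rng.normal(0, 0.1, size=(64, 10))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with a fresh Bernoulli dropout mask on the hidden units."""
    h = np.maximum(x @ W1, 0.0)                       # ReLU hidden layer
    mask = rng.binomial(1, 1.0 - p_drop, size=h.shape)
    h = h * mask / (1.0 - p_drop)                     # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, n_samples=50):
    """Average softmax outputs over many stochastic passes (MC sampling)."""
    samples = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    mean_probs = samples.mean(axis=0)
    # Predictive entropy of the averaged distribution: higher = more uncertain.
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs, entropy

x = rng.normal(0, 1, size=(1, 784))   # stand-in for a (noisy) MNIST image
probs, H = mc_dropout_predict(x)
```

The abstract's noise experiment amounts to feeding increasingly corrupted inputs (e.g. `x + sigma * rng.normal(size=x.shape)`) through `mc_dropout_predict` and checking that the entropy rises for inputs far from the training distribution, whereas a single deterministic softmax pass tends to stay overconfident.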
