arXiv:1611.01639 [cs.LG]
Representation of uncertainty in deep neural networks through sampling
Patrick McClure, Nikolaus Kriegeskorte
Published 2016-11-05 (version 1)
As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modeling uncertainty is one of the key features of Bayesian methods. Scalable Bayesian DNNs that use dropout-based variational distributions have recently been proposed. Here we evaluate the ability of Bayesian DNNs trained with Bernoulli or Gaussian distributions over units (dropout) or weights (dropconnect) to represent their own uncertainty at the time of inference through sampling. We tested how well Bayesian fully connected and convolutional DNNs represented their own uncertainty in classifying the MNIST handwritten digits. By adding different levels of Gaussian noise to the test images, we assessed how DNNs represented their uncertainty about regions of input space not covered by the training set. Bayesian DNNs estimated their own uncertainty more accurately than traditional DNNs with a softmax output. These results are important for building better deep learning systems and for investigating the hypothesis that biological neural networks use sampling to represent uncertainty.
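The abstract describes estimating uncertainty by keeping the dropout distribution active at inference time and sampling multiple stochastic forward passes (Monte Carlo dropout). The paper's exact architectures, dropout rates, and sample counts are not given here, so the sketch below is only a minimal illustration of that idea in PyTorch: the `MCDropoutNet` class, its layer sizes, and `n_samples` are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutNet(nn.Module):
    """Small fully connected classifier with dropout kept stochastic at test time."""
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(784, 800)   # assumed sizes for flattened 28x28 MNIST input
        self.fc2 = nn.Linear(800, 800)
        self.out = nn.Linear(800, 10)
        self.p_drop = p_drop

    def forward(self, x):
        # training=True keeps the Bernoulli dropout mask random even during
        # evaluation, which is what Monte Carlo sampling over units requires.
        x = F.dropout(F.relu(self.fc1(x)), p=self.p_drop, training=True)
        x = F.dropout(F.relu(self.fc2(x)), p=self.p_drop, training=True)
        return self.out(x)

def predictive_distribution(model, x, n_samples=50):
    """Average the softmax output over n_samples stochastic forward passes.

    The mean gives the predictive distribution; the spread across samples
    (or the entropy of the mean) serves as the uncertainty estimate.
    """
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

# Example: uncertainty for a batch of (possibly noise-corrupted) test images.
model = MCDropoutNet()            # assume weights were already trained with dropout
x = torch.randn(8, 784)           # stand-in for flattened, noise-perturbed MNIST images
mean_probs, std_probs = predictive_distribution(model, x)
entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
print(entropy)                    # higher entropy -> higher estimated uncertainty
```

Sampling over weights (a dropconnect-style variational distribution) follows the same pattern, except the random mask is applied to the weight matrices rather than to the unit activations.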