arXiv:1706.02690 [cs.LG]

Principled Detection of Out-of-Distribution Examples in Neural Networks

Shiyu Liang, Yixuan Li, R. Srikant

Published 2017-06-08 (Version 1)

We consider the problem of detecting out-of-distribution examples in neural networks. We propose ODIN, a simple and effective out-of-distribution detector for neural networks that does not require any change to a pre-trained model. Our method is based on the observation that temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution samples, allowing for more effective detection. We show in a series of experiments that our approach is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach [1] by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on a DenseNet trained on CIFAR-10 when the true positive rate is 95%. We theoretically analyze the method and prove that the performance improvement is guaranteed under mild conditions on the image distributions.
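To make the two ingredients concrete, the following is a minimal PyTorch-style sketch of an ODIN-like score: temperature-scaled softmax plus a small input perturbation in the direction that increases the predicted-class score, with the final confidence thresholded downstream to flag out-of-distribution inputs. The function name, and the default values of `temperature` and `epsilon`, are illustrative assumptions; the paper tunes these hyperparameters per dataset and architecture.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Compute an ODIN-style confidence score for a batch of inputs x.

    Higher scores suggest in-distribution; inputs whose score falls below
    a chosen threshold are flagged as out-of-distribution.
    Note: default temperature/epsilon are illustrative, not prescriptive.
    """
    model.eval()
    x = x.clone().detach().requires_grad_(True)

    # Temperature-scaled softmax on the original input.
    logits = model(x) / temperature
    log_probs = F.log_softmax(logits, dim=1)

    # Loss is the negative log-probability of the predicted class.
    loss = -log_probs.max(dim=1).values.sum()
    loss.backward()

    # Perturb the input slightly in the direction that increases the
    # predicted-class softmax score (a one-step sign-gradient update).
    x_perturbed = (x - epsilon * x.grad.sign()).detach()

    # Re-score the perturbed input with the same temperature.
    with torch.no_grad():
        logits = model(x_perturbed) / temperature
        score = F.softmax(logits, dim=1).max(dim=1).values
    return score
```

In use, one would calibrate the threshold on held-in validation data (e.g. to achieve a 95% true positive rate on in-distribution samples) and then flag any input whose score falls below it.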

Related articles:
arXiv:1807.04225 [cs.LG] (Published 2018-07-11)
Measuring abstract reasoning in neural networks
arXiv:1805.09370 [cs.LG] (Published 2018-05-23)
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
arXiv:1805.07405 [cs.LG] (Published 2018-05-18)
Processing of missing data by neural networks