arXiv:2010.03753 [cs.LG]

Uncertainty in Neural Processes

Saeid Naderiparizi, Kenny Chiu, Benjamin Bloem-Reddy, Frank Wood

Published 2020-10-08 (Version 1)

We explore the effects of architecture and training-objective choices on amortized posterior predictive inference in probabilistic conditional generative models. We intend this work as a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large; we instead focus on the case where the amount of conditioning data is small. We highlight specific architecture and objective choices that we find lead to qualitative and quantitative improvements in posterior inference in this low-data regime. Specifically, we explore the effects of the choice of pooling operator and variational family on posterior quality in neural processes. Superior posterior predictive samples drawn from our novel neural process architectures are demonstrated via image completion/in-painting experiments.
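The "pooling operator" referred to above is the permutation-invariant aggregation step in a neural process encoder, which collapses a variable-sized context set into a single representation. The following is a minimal sketch of such a set encoder with a configurable pooling operator; it is not the authors' implementation, and the PyTorch framework choice, class name, and layer sizes are illustrative assumptions.

# Minimal neural-process-style set encoder (illustrative sketch, not the paper's code).
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Encodes a context set {(x_i, y_i)} into one representation via a
    permutation-invariant pooling operator ("mean", "sum", or "max")."""

    def __init__(self, x_dim, y_dim, hidden_dim=128, pool="mean"):
        super().__init__()
        self.pool = pool
        self.mlp = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x_context, y_context):
        # x_context: (batch, num_context, x_dim); y_context: (batch, num_context, y_dim)
        h = self.mlp(torch.cat([x_context, y_context], dim=-1))
        if self.pool == "mean":
            return h.mean(dim=1)
        if self.pool == "sum":
            return h.sum(dim=1)
        if self.pool == "max":
            return h.max(dim=1).values
        raise ValueError(f"unknown pooling operator: {self.pool}")

# Example: encode a batch of 4 context sets, each with 10 (x, y) pairs.
enc = SetEncoder(x_dim=2, y_dim=3, pool="mean")
r = enc(torch.randn(4, 10, 2), torch.randn(4, 10, 3))  # r has shape (4, 128)

One reason the choice matters in the small-context regime the abstract focuses on: mean pooling keeps the scale of the aggregated representation independent of the number of context points, whereas sum pooling grows with context size.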

Related articles:
arXiv:1810.06530 [cs.LG] (Published 2018-10-15)
Successor Uncertainties: exploration and uncertainty in temporal difference learning
arXiv:2006.10562 [cs.LG] (Published 2020-06-18)
Uncertainty in Gradient Boosting via Ensembles
arXiv:2305.10384 [cs.LG] (Published 2023-05-17)
Logit-Based Ensemble Distribution Distillation for Robust Autoregressive Sequence Uncertainties