arXiv:2211.01053 [cs.LG]

Fantasizing with Dual GPs in Bayesian Optimization and Active Learning

Paul E. Chang, Prakhar Verma, ST John, Victor Picheny, Henry Moss, Arno Solin

Published 2022-11-02 (Version 1)

Gaussian processes (GPs) are the main surrogate models used for sequential modelling tasks such as Bayesian optimization and active learning. Their drawbacks are poor scaling with the amount of data and the need to run an optimization loop when using a non-Gaussian likelihood. In this paper, we focus on 'fantasizing' batch acquisition functions, which need the ability to condition on new fantasized data in a computationally efficient way. By using a sparse dual GP parameterization, we gain linear scaling with batch size as well as one-step updates for non-Gaussian likelihoods, thus extending sparse models to greedy batch-fantasizing acquisition functions.
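To make the greedy fantasizing loop concrete, below is a minimal NumPy sketch: at each step it picks the candidate input with the highest posterior variance, 'fantasizes' its label as the current posterior mean, and conditions the GP on that fantasy before choosing the next batch point. This is an illustrative reconstruction, not the paper's implementation: it uses a naive exact GP (with cubic-cost conditioning) in place of the sparse dual parameterization, and the max-variance acquisition rule and posterior-mean fantasy labels are simplifying assumptions.

    import numpy as np

    def rbf(X1, X2, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel matrix between two sets of inputs.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_posterior(X, y, X_cand, noise=1e-2):
        # Exact GP posterior mean and variance at candidate points.
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, X_cand)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        v = np.linalg.solve(L, Ks)
        mean = Ks.T @ alpha
        var = rbf(X_cand, X_cand).diagonal() - (v ** 2).sum(axis=0)
        return mean, var

    def greedy_fantasized_batch(X, y, X_cand, batch_size=3):
        # Greedily assemble a batch: select the highest-variance candidate,
        # condition on a fantasized label (here: the posterior mean), repeat.
        # Each naive update re-factorizes the full kernel matrix; the paper's
        # sparse dual parameterization makes this conditioning step cheap,
        # giving linear scaling in the batch size.
        X, y, batch = X.copy(), y.copy(), []
        for _ in range(batch_size):
            mean, var = gp_posterior(X, y, X_cand)
            i = int(np.argmax(var))  # max-variance acquisition (assumption)
            batch.append(X_cand[i])
            X = np.vstack([X, X_cand[i:i + 1]])
            y = np.append(y, mean[i])  # fantasized observation
        return np.array(batch)

    # Toy usage on a 1-D problem.
    rng = np.random.default_rng(0)
    X_train = rng.uniform(0.0, 1.0, (10, 1))
    y_train = np.sin(6.0 * X_train[:, 0]) + 0.1 * rng.standard_normal(10)
    X_cand = rng.uniform(0.0, 1.0, (50, 1))
    print(greedy_fantasized_batch(X_train, y_train, X_cand))

The design point the sketch highlights is that each fantasy step requires re-conditioning the posterior; with an exact GP that cost compounds across the batch, which is precisely the bottleneck the dual parameterization is meant to remove.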

Comments: In the 2022 NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems
Categories: cs.LG, stat.ML
Related articles:
arXiv:2004.09557 [cs.LG] (Published 2020-04-20)
ALPS: Active Learning via Perturbations
arXiv:1602.07265 [cs.LG] (Published 2016-02-23)
Search Improves Label for Active Learning
arXiv:1309.6875 [cs.LG] (Published 2013-09-26)
Active Learning with Expert Advice