arXiv Analytics

arXiv:2206.09341 [cs.LG]

Bayesian Optimization under Stochastic Delayed Feedback

Arun Verma, Zhongxiang Dai, Bryan Kian Hsiang Low

Published 2022-06-19 (Version 1)

Bayesian optimization (BO) is a widely used sequential method for zeroth-order optimization of complex and expensive-to-evaluate black-box functions. Existing BO methods assume that the function evaluation (feedback) is available to the learner immediately or after a fixed delay. Such assumptions may not be practical in many real-life problems like online recommendations, clinical trials, and hyperparameter tuning, where feedback is available only after a random delay. To benefit from experimental parallelization in these problems, the learner needs to start new function evaluations without waiting for the delayed feedback. In this paper, we consider the problem of BO under stochastic delayed feedback. We propose algorithms with sub-linear regret guarantees that efficiently address the dilemma of selecting new function queries while waiting for randomly delayed feedback. Building on our results, we also make novel contributions to batch BO and contextual Gaussian process bandits. Experiments on synthetic and real-life datasets verify the performance of our algorithms.
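The abstract does not spell out the proposed algorithms, but the core dilemma — picking a new query while earlier evaluations are still in flight — can be illustrated with a minimal GP-UCB loop under random delays. The sketch below imputes each pending query with the GP posterior mean (a "hallucinated observation" heuristic known from batch BO, not necessarily the authors' method), so the learner avoids re-querying points whose feedback has not yet arrived. The objective `f`, the RBF lengthscale, the geometric delay distribution, and the UCB weight `beta` are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy black-box objective (assumed for illustration); max near x = pi/6.
    return np.sin(3.0 * x)

def rbf(a, b, ls=0.2):
    # RBF (squared-exponential) kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Posterior mean and std of a zero-mean GP with unit-variance RBF kernel.
    if len(X) == 0:
        return np.zeros(len(Xs)), np.ones(len(Xs))
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

cand = np.linspace(0.0, 1.0, 101)   # candidate query points
X_obs, y_obs = [], []               # feedback that has already arrived
pending = []                        # (query, arrival_round) still delayed
beta = 2.0                          # UCB exploration weight (assumed)

for t in range(40):
    # 1) Collect any feedback whose random delay has elapsed.
    arrived = [(x, a) for (x, a) in pending if a <= t]
    pending = [(x, a) for (x, a) in pending if a > t]
    for x, _ in arrived:
        X_obs.append(x)
        y_obs.append(f(x))

    # 2) Impute pending queries with the posterior mean ("hallucinated"
    #    observations), so their posterior std collapses and the learner
    #    does not select them again while waiting.
    Xo, yo = np.array(X_obs), np.array(y_obs)
    Xp = np.array([x for x, _ in pending])
    if len(Xp):
        mu_p, _ = gp_posterior(Xo, yo, Xp)
        Xa, ya = np.concatenate([Xo, Xp]), np.concatenate([yo, mu_p])
    else:
        Xa, ya = Xo, yo

    # 3) Select the next query by GP-UCB on the augmented model.
    mu, sd = gp_posterior(Xa, ya, cand)
    x_next = cand[np.argmax(mu + beta * sd)]

    # 4) The evaluation returns only after a random (geometric) delay.
    pending.append((x_next, t + 1 + rng.geometric(0.5)))

best_y = max(y_obs) if y_obs else None
```

Without the imputation step the UCB rule would keep re-selecting the same point until its feedback arrived, which is exactly the waste that delay-aware BO methods aim to avoid.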

Related articles:
arXiv:2305.08624 [cs.LG] (Published 2023-05-15)
Mastering the exploration-exploitation trade-off in Bayesian Optimization
arXiv:2210.05977 [cs.LG] (Published 2022-10-12)
BORA: Bayesian Optimization for Resource Allocation
arXiv:1611.07343 [cs.LG] (Published 2016-11-22)
Limbo: A Fast and Flexible Library for Bayesian Optimization