

Bayesian optimization (BO) is a powerful framework for optimizing black-box, expensive-to-evaluate functions. Over the past decade, many algorithms have been proposed to integrate cheaper, lower-fidelity approximations of the objective function into the optimization process, with the goal of converging towards the global optimum at a reduced cost. This task is generally referred to as multi-fidelity Bayesian optimization (MFBO). However, MFBO algorithms can incur higher optimization costs than their vanilla BO counterparts, especially when the low-fidelity sources are poor approximations of the objective function, thereby defeating their purpose. To address this issue, we propose rMFBO (robust MFBO), a methodology for making any GP-based MFBO scheme robust to the addition of unreliable information sources. rMFBO comes with a theoretical guarantee that its performance can be bounded by that of its vanilla BO analog, with high, controllable probability. We demonstrate the effectiveness of the proposed methodology on a number of numerical benchmarks, outperforming earlier MFBO methods on unreliable sources. We expect rMFBO to be particularly useful for reliably including human experts with varying levels of knowledge within BO processes.
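To make the "GP-based BO" setting concrete, the following is a minimal, self-contained sketch of a vanilla single-fidelity BO loop with a Gaussian-process surrogate and an expected-improvement acquisition function. It is purely illustrative: the toy objective, kernel length scale, and candidate grid are all assumptions for the example, and this is not the authors' rMFBO algorithm.

```python
# Minimal GP-based Bayesian optimization loop (illustrative sketch only;
# NOT the rMFBO method from the paper). Pure NumPy GP with an RBF kernel
# and expected-improvement acquisition maximized over a fixed grid.
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, length=0.2):
    # Squared-exponential kernel between 1-D input arrays A and B.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Exact GP posterior mean and stddev at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimization: improvement over the current best observation.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):
    # Toy stand-in for an expensive black-box objective.
    return np.sin(3 * x) + x**2 - 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=3)          # small initial design
y = objective(X)
grid = np.linspace(-1, 2, 200)          # candidate points for acquisition
for _ in range(15):                     # BO loop: fit GP, pick EI maximizer, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
print(X[np.argmin(y)], y.min())         # incumbent after the budget is spent
```

An MFBO scheme would extend this loop with cheaper approximate sources of `objective`; rMFBO's contribution is guaranteeing that adding such sources cannot degrade performance below this vanilla baseline, with high probability.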
Original language: English
Title of host publication: Proceedings of the 26th International Conference on Artificial Intelligence and Statistics
Publication status: Published - 25 Apr 2023
MoE publication type: A4 Conference publication
Event: International Conference on Artificial Intelligence and Statistics - Valencia, Spain
Duration: 25 Apr 2023 - 27 Apr 2023
Conference number: 26

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: International Conference on Artificial Intelligence and Statistics
Abbreviated title: AISTATS


