Fantasizing with Dual GPs in Bayesian Optimization and Active Learning

Paul Chang*, Prakhar Verma, ST John, Victor Picheny, Henry Moss, Arno Solin

*Corresponding author for this work

Research output: Contribution to conference › Paper › Scientific › peer-review

Abstract

Gaussian processes (GPs) are the main surrogate functions used for sequential modelling tasks such as Bayesian Optimization and Active Learning. Their drawbacks are poor scaling with data and the need to run an optimization loop when using a non-Gaussian likelihood. In this paper, we focus on 'fantasizing' batch acquisition functions, which require computationally efficient conditioning on newly fantasized data. By using a sparse Dual GP parameterization, we gain linear scaling with batch size as well as one-step updates for non-Gaussian likelihoods, thus extending sparse models to greedy batch fantasizing acquisition functions.
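
The greedy fantasizing loop described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy version in plain NumPy: it uses an exact Gaussian-likelihood GP and a max-variance acquisition rule, and it naively re-solves the posterior at each step. It does not reproduce the paper's actual contribution (the sparse Dual GP parameterization with one-step updates for non-Gaussian likelihoods); it only shows what "conditioning on fantasized data" means in a batch acquisition loop.

# Illustrative sketch (not the authors' implementation): greedily build a
# batch by conditioning a GP surrogate on fantasized observations.
import numpy as np

def rbf(A, B, lengthscale=0.5, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Exact GP posterior mean and marginal variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(rbf(Xs, Xs)) - (v**2).sum(0)
    return mean, var

# Toy observed data and a pool of candidate inputs.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (10, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(10)
candidates = np.linspace(0, 1, 200)[:, None]

batch = []
X_fant, y_fant = X.copy(), y.copy()
for _ in range(5):  # greedily assemble a batch of 5 points
    mean, var = gp_posterior(X_fant, y_fant, candidates)
    idx = int(np.argmax(var))   # acquisition: maximum posterior variance
    x_new = candidates[idx]
    y_new = mean[idx]           # fantasized outcome at the posterior mean
    batch.append(x_new)
    # Condition on the fantasized observation before picking the next point.
    # The paper's sparse dual parameterization makes this a one-step update
    # that scales linearly with batch size; this toy version simply refits
    # the exact GP from scratch for clarity.
    X_fant = np.vstack([X_fant, x_new[None, :]])
    y_fant = np.append(y_fant, y_new)

print(np.array(batch).ravel())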
Original language: English
Publication status: Published - 2022
MoE publication type: Not Eligible
Event: Conference on Neural Information Processing Systems - New Orleans, United States
Duration: 28 Nov 2022 – 9 Dec 2022
Conference number: 36
https://nips.cc/

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: United States
City: New Orleans
Period: 28/11/2022 – 09/12/2022
Internet address: https://nips.cc/
