Fast optimize-and-sample method for differentiable Galerkin approximations of multi-layered Gaussian process priors

Muhammad F. Emzir, Niki A. Loppi, Zheng Zhao, Syeda S. Hassan, Simo Särkkä

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Multi-layered Gaussian process (field) priors are non-Gaussian priors that make it possible to perform Bayesian inference on both smooth and discontinuous functions. Previously, Bayesian inference with these priors required constructing a Markov chain Monte Carlo sampler, and because such samplers converge slowly to the stationary distribution, the approach is computationally inefficient and its utility has only been demonstrated on small canonical test problems. Furthermore, in many Bayesian inference applications, such as Bayesian inverse problems, uncertainty quantification of the hyper-prior layers is of less interest, since the main concern is to quantify the randomness of the process/field of interest. In this article, we propose an alternative approach in which we optimize the hyper-prior layers while performing inference only for the lowest layer. Specifically, we use a Galerkin approximation together with automatic differentiation to accelerate the optimization. We validate the proposed approach against several existing non-stationary Gaussian process methods and demonstrate that it can significantly decrease the execution time while maintaining comparable accuracy. We also apply the method to an X-ray tomography inverse problem. Owing to its improved performance and robustness, this new approach opens up the possibility of applying multi-layer Gaussian field priors to more complex problems.
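The following is a minimal illustrative sketch, not the authors' implementation, of the optimize-then-sample idea described in the abstract: a two-layer Gaussian process in JAX, where a log length-scale field (the hyper-prior layer) is represented with a small cosine basis expansion standing in for a Galerkin approximation, its coefficients are optimized by gradient descent on the negative log marginal likelihood using automatic differentiation, and only the lowest layer is then sampled conditionally on the optimized hyper-layer. The Gibbs non-stationary kernel, the toy step-function data, the fixed noise level, and all variable names are assumptions made for illustration.

import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision for stable Cholesky factors

# Toy data: a step function observed with noise, the kind of discontinuous
# target that non-stationary (multi-layer) GP priors are meant to handle.
key = jax.random.PRNGKey(0)
x = jnp.linspace(0.0, 1.0, 80)
y = jnp.where(x < 0.5, 0.0, 1.0) + 0.05 * jax.random.normal(key, (80,))
sigma_n = 0.05  # assumed known observation-noise standard deviation

# Hyper-prior layer: log length-scale field ell(x) in a small cosine basis,
# standing in for a Galerkin-type expansion (illustrative choice).
n_basis = 6

def basis(xs):
    return jnp.stack([jnp.cos(jnp.pi * k * xs) for k in range(n_basis)], axis=1)

def gibbs_kernel(theta, xa, xb):
    # Gibbs' non-stationary squared-exponential covariance with
    # input-dependent length-scale ell(x) = exp(basis(x) @ theta).
    la = jnp.exp(jnp.clip(basis(xa) @ theta, -5.0, 5.0))
    lb = jnp.exp(jnp.clip(basis(xb) @ theta, -5.0, 5.0))
    l2 = la[:, None] ** 2 + lb[None, :] ** 2
    pre = jnp.sqrt(2.0 * la[:, None] * lb[None, :] / l2)
    return pre * jnp.exp(-((xa[:, None] - xb[None, :]) ** 2) / l2)

def objective(theta):
    # Negative log marginal likelihood of y plus a Gaussian penalty on the
    # hyper-layer coefficients (a simple stand-in for the hyper-prior).
    K = gibbs_kernel(theta, x, x) + sigma_n ** 2 * jnp.eye(x.shape[0])
    L = jnp.linalg.cholesky(K)
    alpha = jax.scipy.linalg.cho_solve((L, True), y)
    return 0.5 * y @ alpha + jnp.sum(jnp.log(jnp.diag(L))) + 0.5 * theta @ theta

# Optimize the hyper-layer by plain gradient descent; the gradient comes
# from JAX automatic differentiation through the Cholesky factorization.
grad_fn = jax.jit(jax.grad(objective))
theta = jnp.zeros(n_basis)
for _ in range(300):
    theta = theta - 0.05 * grad_fn(theta)

# Sample only the lowest layer, conditionally on the optimized hyper-layer:
# a single draw from the GP posterior at the observation grid.
K = gibbs_kernel(theta, x, x) + sigma_n ** 2 * jnp.eye(x.shape[0])
L = jnp.linalg.cholesky(K)
Kxx = gibbs_kernel(theta, x, x)
mean = Kxx @ jax.scipy.linalg.cho_solve((L, True), y)
cov = Kxx - Kxx @ jax.scipy.linalg.cho_solve((L, True), Kxx) + 1e-6 * jnp.eye(x.shape[0])
draw = mean + jnp.linalg.cholesky(cov) @ jax.random.normal(jax.random.PRNGKey(1), (x.shape[0],))
print("optimized hyper-layer coefficients:", theta)
print("posterior-sample range:", float(draw.min()), float(draw.max()))

In a toy setting like this one would expect the optimized length-scale field to shrink near the discontinuity at x = 0.5, which is the qualitative behaviour non-stationary priors are used for; the paper's actual SPDE-based Galerkin formulation and sampler differ from this sketch.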

Original language: English
Title of host publication: 2022 25th International Conference on Information Fusion, FUSION 2022
Publisher: IEEE
Number of pages: 7
ISBN (Electronic): 9781737749721
DOIs
Publication status: Published - 2022
MoE publication type: A4 Article in a conference publication
Event: International Conference on Information Fusion - Linköping, Sweden
Duration: 4 Jul 2022 - 7 Jul 2022
Conference number: 25

Conference

Conference: International Conference on Information Fusion
Abbreviated title: FUSION
Country/Territory: Sweden
City: Linköping
Period: 04/07/2022 - 07/07/2022

Keywords

  • Bayesian learning
  • Galerkin approximations
  • Gaussian processes
  • inverse problems
  • Markov chain Monte Carlo
