Embarrassingly parallel Markov Chain Monte Carlo (MCMC) exploits parallel computing to scale Bayesian inference to large datasets by using a two-step approach. First, MCMC is run in parallel on (sub)posteriors defined on data partitions. Then, a server combines local results. While efficient, this framework is very sensitive to the quality of subposterior sampling. Common sampling problems such as missing modes or misrepresentation of low-density regions are amplified – instead of being corrected – in the combination phase, leading to catastrophic failures. In this work, we propose a novel combination strategy to mitigate this issue. Our strategy, Parallel Active Inference (PAI), leverages Gaussian Process (GP) surrogate modeling and active learning. After fitting GPs to subposteriors, PAI (i) shares information between GP surrogates to cover missing modes; and (ii) uses active sampling to individually refine subposterior approximations. We validate PAI in challenging benchmarks, including heavy-tailed and multi-modal posteriors and a real-world application to computational neuroscience. Empirical results show that PAI succeeds where previous methods catastrophically fail, with a small communication overhead.
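The sketch below illustrates the basic GP-surrogate combination idea that this framework builds on (not the paper's full PAI method with mode sharing and active sampling): each worker returns subposterior samples with their log-density values, a GP surrogate is fit per subposterior, and the server approximates the full log-posterior as the sum of surrogate means. All function names and the use of scikit-learn are illustrative assumptions.

```python
# Illustrative sketch only, not the authors' PAI implementation.
# Assumes each worker returns (samples, log_density) pairs for its data partition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_subposterior_gp(samples, log_density):
    """Fit a GP surrogate to one subposterior's log-density evaluations."""
    kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(samples, log_density)
    return gp

def combined_log_posterior(theta, gps):
    """Approximate the full log-posterior as the sum of GP surrogate means."""
    theta = np.atleast_2d(theta)
    return sum(gp.predict(theta) for gp in gps)

# Usage (hypothetical): with K workers each returning (samples_k, logp_k),
#   gps = [fit_subposterior_gp(s, lp) for s, lp in worker_outputs]
# the server can then run standard MCMC on combined_log_posterior(theta, gps)
# to draw samples from the combined posterior approximation.
```

PAI's contribution, per the abstract, is to make this combination robust: the GP surrogates exchange information so that modes missed by one subposterior are covered, and active sampling refines each surrogate before combination.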
Original language: English
Title of host publication: Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
Publication status: Published - 2022
MoE publication type: A4 Article in a conference publication
Event: International Conference on Artificial Intelligence and Statistics - Valencia, Spain
Duration: 28 Mar 2022 - 30 Mar 2022
Conference number: 25

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: International Conference on Artificial Intelligence and Statistics
Abbreviated title: AISTATS