Differentially Private Markov Chain Monte Carlo

Mikko Heikkilä, Joonas Jälkö, Onur Dikmen, Antti Honkela

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

14 Citations (Scopus)


Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects. In this paper, we further extend the applicability of DP Bayesian learning by presenting the first general DP Markov chain Monte Carlo (MCMC) algorithm whose privacy guarantees are not subject to unrealistic assumptions on Markov chain convergence and that is applicable to posterior inference in arbitrary models. Our algorithm is based on a decomposition of the Barker acceptance test that allows evaluating the Rényi DP privacy cost of the accept-reject choice. We further show how to improve the DP guarantee through data subsampling and approximate acceptance tests.
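To make the Barker acceptance test concrete, the sketch below implements a plain (non-private) Barker MCMC sampler on an illustrative standard-normal target. The key property is that the Barker test can be written as "accept iff Δ + V > 0" with V ~ Logistic(0, 1), where Δ is the log-density ratio; it is this logistic-noise formulation that the paper decomposes so that noise from a subsampled log-likelihood estimate supplies part of V, making the accept-reject choice differentially private. Everything here (the target, proposal scale, and function names) is an illustrative assumption, not the paper's implementation.

```python
import math
import random


def log_target(theta):
    # Illustrative log-density: standard normal, standing in for a
    # full-data log-posterior (up to a constant).
    return -0.5 * theta * theta


def barker_mcmc(n_iters, seed=0):
    """Barker-test MCMC: accept a proposal iff Delta + V > 0, V ~ Logistic(0, 1).

    This is equivalent to accepting with probability
    exp(Delta) / (1 + exp(Delta)). In the exact (non-private) test shown
    here, V is drawn fresh; the paper's DP variant instead lets the noise
    of a subsampled log-likelihood estimate contribute part of V.
    """
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_iters):
        prop = theta + rng.gauss(0.0, 1.0)  # symmetric random-walk proposal
        delta = log_target(prop) - log_target(theta)
        u = rng.random()
        while u == 0.0:  # guard against log(0) in the logistic draw
            u = rng.random()
        v = math.log(u) - math.log1p(-u)  # Logistic(0, 1) sample
        if delta + v > 0:  # Barker acceptance test
            theta = prop
        samples.append(theta)
    return samples
```

The Barker test is slightly less statistically efficient than the usual Metropolis-Hastings test (its acceptance probability is strictly smaller), but its smooth, noise-additive form is what allows the privacy cost of each accept-reject choice to be analysed.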
Original language: English
Title of host publication: 33rd Conference on Neural Information Processing Systems
Subtitle of host publication: NeurIPS 2019
Publisher: Neural Information Processing Systems Foundation
Number of pages: 11
Publication status: Published - 2019
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 8 Dec 2019 - 14 Dec 2019
Conference number: 33

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
ISSN (Electronic): 1049-5258


Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
