Differentially Private Markov Chain Monte Carlo

Mikko Heikkilä, Joonas Jälkö, Onur Dikmen, Antti Honkela

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

14 Citations (Scopus)

Abstract

Recent developments in differentially private (DP) machine learning and DP Bayesian learning have enabled learning under strong privacy guarantees for the training data subjects. In this paper, we further extend the applicability of DP Bayesian learning by presenting the first general DP Markov chain Monte Carlo (MCMC) algorithm whose privacy guarantees are not subject to unrealistic assumptions on Markov chain convergence and that is applicable to posterior inference in arbitrary models. Our algorithm is based on a decomposition of the Barker acceptance test that allows evaluating the Rényi DP privacy cost of the accept-reject choice. We further show how to improve the DP guarantee through data subsampling and approximate acceptance tests.
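
For orientation, the threshold form of the Barker acceptance test that the decomposition above builds on can be sketched as follows. This is a minimal, illustrative Python sketch, not the paper's algorithm: `noisy_barker_accept`, its `sigma` parameter, and the `correction_sampler` argument are hypothetical stand-ins for the subsampled, noise-perturbed test statistic and the correction-noise distribution derived in the paper.

```python
import numpy as np

def barker_accept(log_ratio, rng):
    # Standard (non-private) Barker test: accept a proposal with probability
    # sigmoid(log_ratio). Equivalently, accept iff log_ratio + V > 0 with
    # V ~ Logistic(0, 1); this threshold form underlies the DP decomposition.
    v = rng.logistic(loc=0.0, scale=1.0)
    return log_ratio + v > 0

def noisy_barker_accept(subsample_log_ratio, sigma, correction_sampler, rng):
    # Illustration only (not the paper's exact algorithm): the log-likelihood
    # ratio is computed on a data subsample and perturbed with Gaussian noise
    # of scale sigma, which is what incurs the Renyi DP cost; a correction
    # variable is then added so the total noise is again Logistic(0, 1) and
    # the Barker test remains valid. correction_sampler is a hypothetical
    # placeholder for the correction distribution derived in the paper.
    noise = rng.normal(0.0, sigma) + correction_sampler(rng)
    return subsample_log_ratio + noise > 0

# Toy usage: standard-normal target, symmetric proposal, full-data log ratio.
rng = np.random.default_rng(0)
theta, proposal = 0.0, 0.3
log_ratio = -0.5 * proposal**2 + 0.5 * theta**2
accepted = barker_accept(log_ratio, rng)
```
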
Original language: English
Title of host publication: 33rd Conference on Neural Information Processing Systems
Subtitle of host publication: NeurIPS 2019
Publisher: Neural Information Processing Systems Foundation
Number of pages: 11
Publication status: Published - 2019
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 8 Dec 2019 – 14 Dec 2019
Conference number: 33
https://neurips.cc

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Volume: 32
ISSN (Electronic): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: Canada
City: Vancouver
Period: 08/12/2019 – 14/12/2019
Internet address: https://neurips.cc
