Differentially Private Bayesian Learning on Distributed Data

Mikko A. Heikkilä, Eemil Lagerspetz, Samuel Kaski, Kana Shimizu, Sasu Tarkoma, Antti Honkela

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

28 Citations (Scopus)


Many applications of machine learning, for example in health care, would benefit from methods that can guarantee the privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results. Standard DP algorithms either require a single trusted party with access to the entire data set, which is a clear weakness, or add prohibitive amounts of noise. We consider DP Bayesian learning in a distributed setting, where each party holds only a single sample or a few samples of the data. We propose a learning strategy based on a secure multi-party sum function for aggregating summaries from the data holders and the Gaussian mechanism for DP. Our method enables DP Bayesian inference that is asymptotically optimal and practically efficient, with rapidly diminishing extra cost.
Original language: English
Title of host publication: Advances in Neural Information Processing Systems 30
Subtitle of host publication: Proceedings of NIPS 2017
Publisher: Curran Associates Inc.
Number of pages: 10
Publication status: Published - 2017
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - Long Beach, United States
Duration: 4 Dec 2017 – 9 Dec 2017
Conference number: 31

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Curran Associates
ISSN (Print): 1049-5258


Conference: Conference on Neural Information Processing Systems
Abbreviated title: NIPS
Country/Territory: United States
City: Long Beach


