Privacy-preserving data sharing via probabilistic modeling

Joonas Jälkö*, Eemil Lagerspetz, Jari Haukka, Sasu Tarkoma, Antti Honkela, Samuel Kaski

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Differential privacy allows quantifying the privacy loss that results from accessing sensitive personal data. Repeated accesses to the underlying data incur increasing loss. Releasing data as privacy-preserving synthetic data would avoid this limitation, but would leave open the problem of deciding what kind of synthetic data to design. We propose formulating the problem of private data release through probabilistic modeling. This approach transforms the problem of designing the synthetic data into choosing a model for the data, which also allows the inclusion of prior knowledge that improves the quality of the synthetic data. We demonstrate empirically, in an epidemiological study, that statistical discoveries can be reliably reproduced from the synthetic data. We expect the method to have broad use in creating high-quality anonymized data twins of key datasets for research.
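To make the idea concrete, the following is a minimal toy sketch, not the paper's actual method: it "models" a categorical dataset with a differentially private histogram (Laplace mechanism, sensitivity 1 per count) and then samples synthetic records from the noisy model. Once the synthetic data is released, any number of analyses can be run on it without incurring further privacy loss. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def dp_synthetic_categorical(data, categories, epsilon, n_synth, seed=None):
    """Toy illustration of DP synthetic data release via a probabilistic model.

    Privately estimates category counts with the Laplace mechanism
    (each record changes one count by at most 1, so sensitivity is 1),
    normalizes the noisy counts into a multinomial model, and samples
    synthetic records from that model. Not the paper's actual method.
    """
    rng = np.random.default_rng(seed)
    # Non-private sufficient statistics of the simple multinomial model
    counts = np.array([sum(1 for x in data if x == c) for c in categories],
                      dtype=float)
    # Laplace mechanism: noise scale = sensitivity / epsilon = 1 / epsilon
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=len(categories))
    # Clip negatives and renormalize into a valid probability vector
    probs = np.clip(noisy, 0.0, None)
    total = probs.sum()
    if total <= 0.0:
        probs = np.full(len(categories), 1.0 / len(categories))
    else:
        probs = probs / total
    # Sample the synthetic "data twin" from the private model
    return list(rng.choice(categories, size=n_synth, p=probs))

# Hypothetical sensitive dataset; the released synthetic data can be
# analyzed repeatedly without additional privacy loss.
data = ["A"] * 70 + ["B"] * 25 + ["C"] * 5
synth = dp_synthetic_categorical(data, ["A", "B", "C"],
                                 epsilon=1.0, n_synth=100, seed=0)
print(len(synth))  # 100 synthetic records
```

The paper itself uses richer probabilistic models fitted with differentially private inference; the design choice illustrated here is only the general pattern of fitting a model privately once, then sampling freely.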

Original language: English
Article number: 100271
Number of pages: 10
Journal: Patterns
Volume: 2
Issue number: 7
DOIs
Publication status: Published - 9 Jul 2021
MoE publication type: A1 Journal article-refereed

Keywords

  • differential privacy
  • DSML 2: Proof-of-Concept: Data science output has been formulated, implemented, and tested for one domain/problem
  • machine learning
  • open data
  • probabilistic modeling
  • synthetic data

