Differentially Private Bayesian Inference for Generalized Linear Models

Tejas Kulkarni, Joonas Jälkö, Antti Koskela, Samuel Kaski, Antti Honkela

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review



Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in a data analyst's repertoire and are often applied to sensitive datasets. A large body of prior work investigating GLMs under differential privacy (DP) constraints provides only private point estimates of the regression coefficients and cannot quantify parameter uncertainty.

In this work, with logistic and Poisson regression as running examples, we introduce a generic noise-aware DP Bayesian inference method for the GLM at hand, given a noisy sum of summary statistics. Quantifying uncertainty allows us to determine which regression coefficients are statistically significantly different from zero. We provide a tight privacy analysis and experimentally demonstrate that the posteriors obtained from our model, while adhering to strong privacy guarantees, are close to the non-private posteriors.
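The "noisy sum of summary statistics" mentioned above is typically released via the standard Gaussian mechanism: each example's summary statistic is clipped to bound sensitivity, the statistics are summed, and calibrated Gaussian noise is added. The sketch below illustrates this generic pattern for logistic regression, where a natural per-example statistic is y_i * x_i; the function name, parameters, and clipping scheme are our own illustrative choices, not the paper's exact construction.

```python
import numpy as np

def dp_sufficient_statistic(X, y, clip_norm, sigma, rng=None):
    """Release a noisy sum of per-example statistics s_i = y_i * x_i
    under the Gaussian mechanism (illustrative sketch; not the paper's
    exact mechanism or parameterization).

    X: (n, d) feature matrix, y: (n,) labels in {-1, +1},
    clip_norm: L2 clipping bound C, sigma: noise multiplier.
    """
    rng = np.random.default_rng(rng)
    s = y[:, None] * X                              # per-example summary statistic
    norms = np.linalg.norm(s, axis=1, keepdims=True)
    s = s * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))  # bound L2 sensitivity by C
    # Gaussian noise with standard deviation sigma * C per coordinate
    noise = rng.normal(0.0, sigma * clip_norm, size=X.shape[1])
    return s.sum(axis=0) + noise
```

A noise-aware Bayesian method then treats this noisy sum as the observed data and models the added Gaussian noise explicitly in the likelihood, rather than ignoring the perturbation, which is what lets the posterior reflect both sampling and privacy noise.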

Original language: English
Title of host publication: Proceedings of the 38th International Conference on Machine Learning
Editors: M. Meila, T. Zhang
Number of pages: 12
Publication status: Published - 2021
MoE publication type: A4 Conference publication
Event: International Conference on Machine Learning - Virtual, Online
Duration: 18 Jul 2021 - 24 Jul 2021
Conference number: 38

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: International Conference on Machine Learning
Abbreviated title: ICML
City: Virtual, Online


