Computing Tight Differential Privacy Guarantees Using FFT

Antti Koskela, Joonas Jälkö, Antti Honkela

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Differentially private (DP) machine learning has recently become popular. The privacy loss of DP algorithms is commonly reported using (ε, δ)-DP. In this paper, we propose a numerical accountant for evaluating the privacy loss of algorithms with continuous one-dimensional output. This accountant can be applied to the subsampled multidimensional Gaussian mechanism, which underlies the popular DP stochastic gradient descent. The proposed method is based on a numerical approximation of an integral formula that gives the exact (ε, δ)-values. The approximation is carried out by discretising the integral and evaluating the resulting discrete convolutions using the fast Fourier transform algorithm. We give both theoretical error bounds and numerical error estimates for the approximation. Experimental comparisons with state-of-the-art techniques demonstrate significant improvements in bound tightness and/or computation time.

Original language: English
Title of host publication: Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
Editors: S. Chiappa, R. Calandra
Number of pages: 9
Publication status: Published - 2020
MoE publication type: A4 Conference publication
Event: International Conference on Artificial Intelligence and Statistics - Palermo, Italy
Duration: 3 Jun 2020 - 5 Jun 2020
Conference number: 23

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: International Conference on Artificial Intelligence and Statistics
Abbreviated title: AISTATS


