Oblivious Neural Network Predictions via MiniONN Transformations

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Machine learning models hosted in a cloud service are increasingly popular but risk privacy: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about clients' input and clients learn nothing about the model.

We present MiniONN, the first approach for transforming an existing neural network to an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained from standard datasets.
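The oblivious protocols rest on secure two-party computation, where the client and server jointly hold additive secret shares of each intermediate value. As a minimal illustrative sketch (not MiniONN's actual protocol, which additionally uses techniques such as homomorphic precomputation to keep the weights private), the code below shows why additive sharing composes with linear layers: each party can apply a weight vector to its share locally, and the results are shares of the true dot product. The modulus `P` and helper names are assumptions for illustration only.

```python
import secrets

P = 2**31 - 1  # public prime modulus (illustrative choice)

def share(x):
    """Split x into two additive shares mod P; each share alone is uniformly random."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s1, s2):
    """Recombine two additive shares."""
    return (s1 + s2) % P

# Secret-share a toy input vector
x = [5, 12, 7]
x1, x2 = zip(*(share(v) for v in x))

# Additive sharing is linear: applying the weights to each share
# locally yields additive shares of the dot product w . x.
w = [2, 3, 1]
y1 = sum(wi * si for wi, si in zip(w, x1)) % P
y2 = sum(wi * si for wi, si in zip(w, x2)) % P

assert reconstruct(y1, y2) == sum(wi * xi for wi, xi in zip(w, x)) % P
```

Because each share is uniformly random on its own, neither party learns the other's value from its share; nonlinear operations (e.g. ReLU) cannot be computed share-locally this way and need dedicated protocols.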
Original language: English
Title of host publication: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security
ISBN (Electronic): 978-1-4503-4946-8
Publication status: Published - 30 Oct 2017
MoE publication type: A4 Article in a conference publication
Event: ACM Conference on Computer and Communications Security - Dallas, United States
Duration: 30 Oct 2017 – 3 Nov 2017
Conference number: 24


Conference: ACM Conference on Computer and Communications Security
Abbreviated title: CCS
Country: United States


  • privacy
  • machine learning
  • neural network predictions
  • secure two-party computation
