Oblivious Neural Network Predictions via MiniONN Transformations

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Details

Original language: English
Title of host publication: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security
Publisher: ACM
Pages: 619-631
ISBN (Electronic): 978-1-4503-4946-8
State: Published - 30 Oct 2017
MoE publication type: A4 Article in a conference publication
Event: ACM Conference on Computer and Communications Security - Dallas, United States
Duration: 30 Oct 2017 – 3 Nov 2017
Conference number: 24

Conference

Conference: ACM Conference on Computer and Communications Security
Abbreviated title: CCS
Country: United States
City: Dallas
Period: 30/10/2017 – 03/11/2017

Abstract

Machine learning models hosted in a cloud service are increasingly popular but risk privacy: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about clients' input and clients learn nothing about the model.

We present MiniONN, the first approach for transforming an existing neural network to an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained from standard datasets.
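As a rough illustration of the kind of building block such oblivious protocols rest on, the sketch below shows additive secret sharing of a linear layer over a finite ring: the client's input is split into two random-looking shares, and the layer's output can be recovered from the per-share results because the transform is linear. This is a minimal sketch, not the MiniONN protocol itself; the modulus P and the helper names share/reconstruct are illustrative assumptions, and the actual protocol evaluates the cross terms obliviously (e.g. via precomputed multiplication triplets) and handles nonlinear activations with secure two-party computation.

# Minimal sketch (illustrative, not the MiniONN protocol): additive secret
# sharing of a linear layer y = W @ x over Z_P.
import secrets
import numpy as np

P = 2**31 - 1  # public modulus (illustrative choice); arithmetic is done mod P

def share(x):
    """Split vector x into two additive shares with x = x_c + x_s (mod P)."""
    x = np.asarray(x, dtype=np.int64) % P
    x_s = np.array([secrets.randbelow(P) for _ in range(x.size)], dtype=np.int64)
    x_c = (x - x_s) % P
    return x_c, x_s  # client's share, server's share

def reconstruct(a, b):
    """Recombine two additive shares."""
    return (a + b) % P

# Client's private input x; the weight matrix W is the server's secret.
x = np.array([3, 1, 4], dtype=np.int64)
W = np.array([[2, 0, 1],
              [5, 7, 3]], dtype=np.int64)

x_c, x_s = share(x)

# Linearity gives W @ x = W @ x_c + W @ x_s (mod P), so the output can be
# assembled from per-share results. In the real protocol the term involving
# the other party's share is computed without revealing it; here we only
# check the algebra.
y_from_shares = reconstruct(W @ x_c % P, W @ x_s % P)
assert np.array_equal(y_from_shares, (W @ x) % P)
print(y_from_shares)  # [10 34]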

Research areas

• privacy, machine learning, neural network predictions, secure two-party computation

ID: 16370791