PRADA: Protecting Against DNN Model Stealing Attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Standard

PRADA: Protecting Against DNN Model Stealing Attacks. / Juuti, M.; Szyller, S.; Marchal, S.; Asokan, N.

IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019. IEEE, 2019. p. 512-527.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Harvard

Juuti, M, Szyller, S, Marchal, S & Asokan, N 2019, PRADA: Protecting Against DNN Model Stealing Attacks. in IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019. IEEE, pp. 512-527, IEEE European Symposium on Security and Privacy, Stockholm, Sweden, 17/06/2019. https://doi.org/10.1109/EuroSP.2019.00044

APA

Juuti, M., Szyller, S., Marchal, S., & Asokan, N. (2019). PRADA: Protecting Against DNN Model Stealing Attacks. In IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019 (pp. 512-527). IEEE. https://doi.org/10.1109/EuroSP.2019.00044

Vancouver

Juuti M, Szyller S, Marchal S, Asokan N. PRADA: Protecting Against DNN Model Stealing Attacks. In IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019. IEEE. 2019. p. 512-527 https://doi.org/10.1109/EuroSP.2019.00044

Author

Juuti, M. ; Szyller, S. ; Marchal, S. ; Asokan, N. / PRADA: Protecting Against DNN Model Stealing Attacks. IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019. IEEE, 2019. pp. 512-527

BibTeX

@inproceedings{e4bb2228472f4c959cec2e3f26b1f7ed,
title = "PRADA: Protecting Against DNN Model Stealing Attacks",
abstract = "Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.",
keywords = "Predictive models, Computational modeling, Training, Mathematical model, Data mining, Business, Neural networks, Adversarial machine learning, model extraction, model stealing, deep neural network",
author = "M. Juuti and S. Szyller and S. Marchal and N. Asokan",
year = "2019",
doi = "10.1109/EuroSP.2019.00044",
language = "English",
pages = "512--527",
booktitle = "IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019",
publisher = "IEEE",
address = "United States",

}
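
The abstract above describes PRADA's detection approach only at a high level: it analyzes the distribution of consecutive prediction-API queries and raises an alarm when that distribution deviates from benign behavior. The Python sketch below illustrates that idea under stated assumptions; the distance metric (L2 between flattened inputs), the statistical test (Shapiro-Wilk normality), and the thresholds are choices made for this illustration, not necessarily the exact statistics used in the paper.

# Illustrative sketch only. The abstract states that PRADA analyzes the
# distribution of consecutive API queries and alarms when it deviates from
# benign behavior; the distance metric, normality test, and thresholds below
# are assumptions for this example, not the paper's exact choices.
import numpy as np
from scipy.stats import shapiro

class QueryDistributionDetector:
    def __init__(self, min_samples=30, p_threshold=0.05):
        self.past_queries = []     # flattened feature vectors of queries seen so far
        self.min_distances = []    # distance of each query to its nearest predecessor
        self.min_samples = min_samples
        self.p_threshold = p_threshold

    def observe(self, query):
        """Record one prediction-API query; return True if an alarm should be raised."""
        q = np.asarray(query, dtype=np.float64).ravel()
        if self.past_queries:
            # Distance from the new query to the closest previously seen query.
            self.min_distances.append(
                min(np.linalg.norm(q - prev) for prev in self.past_queries)
            )
        self.past_queries.append(q)

        if len(self.min_distances) < self.min_samples:
            return False  # not enough evidence to test the distribution yet

        # Assumption: benign clients produce near-normally distributed minimum
        # distances, while synthetic attack queries distort this distribution.
        _, p_value = shapiro(self.min_distances)
        return p_value < self.p_threshold

A serving frontend would call observe() once per incoming query for each client and could, for example, throttle or block a client once the method returns True.
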

RIS

TY - GEN

T1 - PRADA: Protecting Against DNN Model Stealing Attacks

AU - Juuti, M.

AU - Szyller, S.

AU - Marchal, S.

AU - Asokan, N.

PY - 2019

Y1 - 2019

N2 - Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.

AB - Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.

KW - Predictive models

KW - Computational modeling

KW - Training

KW - Mathematical model

KW - Data mining

KW - Business

KW - Neural networks

KW - Adversarial machine learning

KW - model extraction

KW - model stealing

KW - deep neural network

U2 - 10.1109/EuroSP.2019.00044

DO - 10.1109/EuroSP.2019.00044

M3 - Conference contribution

SP - 512

EP - 527

BT - IEEE European Symposium on Security and Privacy, EuroS&P 2019, Stockholm, Sweden, June 17-19, 2019

PB - IEEE

ER -
