Abstract
Recently, machine learning (ML) has introduced advanced solutions to many domains. Since ML models provide a business advantage to their owners, protecting the intellectual property of ML models has emerged as an important consideration. The confidentiality of ML models can be protected by exposing them to clients only via prediction APIs. However, model extraction attacks can steal the functionality of ML models using the information leaked to clients through the results returned via the API. In this work, we question whether model extraction is a serious threat to complex, real-life ML models. We evaluate the current state-of-the-art model extraction attack (Knockoff nets) against complex models, reproducing and confirming the results of the original paper. However, we also show that the performance of this attack can be limited by several factors, including the ML model architecture and the granularity of the API response. Furthermore, we introduce a defense based on distinguishing the queries used by Knockoff nets from benign queries. Despite the limitations of Knockoff nets, we show that a more realistic adversary can effectively steal complex ML models and evade known defenses.
Original language | English |
---|---|
Title of host publication | Engineering Dependable and Secure Machine Learning Systems |
Subtitle of host publication | Third International Workshop, EDSMLS 2020, New York City, NY, USA, February 7, 2020, Revised Selected Papers |
Editors | Onn Shehory, Eitan Farchi, Guy Barash |
Publisher | Springer |
Pages | 42-57 |
Number of pages | 16 |
ISBN (Electronic) | 978-3-030-62144-5 |
ISBN (Print) | 978-3-030-62143-8 |
DOIs | |
Publication status | Published - 7 Nov 2020 |
MoE publication type | A4 Conference publication |
Event | The AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems, Hilton Midtown, New York, United States. Duration: 7 Feb 2020 → 7 Feb 2020. Conference number: 20. https://sites.google.com/view/edsmls2020/home |
Publication series
Name | Communications in Computer and Information Science |
---|---|
Publisher | Springer |
Volume | 1272 |
ISSN (Print) | 1865-0929 |
ISSN (Electronic) | 1865-0937 |
Workshop
Workshop | The AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems |
---|---|
Abbreviated title | EDSMLS |
Country/Territory | United States |
City | New York |
Period | 07/02/2020 → 07/02/2020 |
Internet address | https://sites.google.com/view/edsmls2020/home |
Keywords
- Deep learning
- Model stealing
- Machine learning