Extraction of Complex DNN Models: Real Threat or Boogeyman?

Buse Gul Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal, N. Asokan

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Machine learning (ML) has recently enabled advanced solutions in many domains. Since ML models provide a business advantage to their owners, protecting the intellectual property of ML models has emerged as an important consideration. The confidentiality of ML models can be protected by exposing them to clients only via prediction APIs. However, model extraction attacks can steal the functionality of ML models using the information leaked to clients through the results returned via the API. In this work, we question whether model extraction is a serious threat to complex, real-life ML models. We evaluate the current state-of-the-art model extraction attack (Knockoff nets) against complex models. We reproduce and confirm the results reported in the original paper, but we also show that the performance of this attack can be limited by several factors, including the ML model architecture and the granularity of API responses. Furthermore, we introduce a defense based on distinguishing the queries used by Knockoff nets from benign queries. Despite the limitations of Knockoff nets, we show that a more realistic adversary can still effectively steal complex ML models and evade known defenses.
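To make the attack surface described above concrete, the sketch below shows the generic extraction loop: the adversary queries the victim's prediction API with its own unlabeled "transfer set" and trains a surrogate on the returned outputs. This is a minimal illustrative PyTorch sketch under simplifying assumptions, not the exact Knockoff nets procedure evaluated in the paper; `victim_api`, `Student`, and the toy transfer set are hypothetical stand-ins.

```python
# Illustrative sketch of a generic model-extraction loop (not the
# paper's exact Knockoff nets procedure). victim_api, Student, and
# the toy transfer set are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Student(nn.Module):
    """Surrogate model the adversary trains to mimic the victim."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def victim_api(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for the victim's black-box prediction API.

    A real attack queries a remote service; depending on the API's
    granularity it returns a full probability vector or only a label.
    Here we return softmaxed random logits so the sketch runs.
    """
    with torch.no_grad():
        return F.softmax(torch.randn(x.size(0), 10), dim=1)


def extract(student: nn.Module, transfer_set, epochs: int = 1) -> nn.Module:
    """Train the surrogate on the information leaked via the API."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x in transfer_set:
            y_victim = victim_api(x)  # query the prediction API
            log_probs = F.log_softmax(student(x), dim=1)
            # With full probability vectors, the surrogate can match the
            # victim's output distribution via KL divergence; a label-only
            # API would force cross-entropy on argmax labels instead.
            loss = F.kl_div(log_probs, y_victim, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student


# Toy usage: random tensors stand in for the adversary's transfer set
# of unlabeled natural images.
transfer_set = [torch.randn(8, 3, 32, 32) for _ in range(4)]
surrogate = extract(Student(), transfer_set)
```

The loss term is where response granularity matters: full probability vectors let the surrogate match the victim's output distribution, whereas a label-only API leaks far less information per query, which is one of the limiting factors the abstract identifies.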
Original language: English
Title of host publication: Engineering Dependable and Secure Machine Learning Systems
Subtitle of host publication: Third International Workshop, EDSMLS 2020, New York City, NY, USA, February 7, 2020, Revised Selected Papers
Editors: Onn Shehory, Eitan Farchi, Guy Barash
Publisher: Springer
Pages: 42-57
Number of pages: 16
ISBN (Electronic): 978-3-030-62144-5
ISBN (Print): 978-3-030-62143-8
DOIs
Publication status: Published - 7 Nov 2020
MoE publication type: A4 Conference publication
Event: The AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems - Hilton, Midtown, New York, United States
Duration: 7 Feb 2020 - 7 Feb 2020
Conference number: 20
https://sites.google.com/view/edsmls2020/home

Publication series

Name: Communications in Computer and Information Science
Publisher: Springer
Volume: 1272
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Workshop

Workshop: The AAAI-20 Workshop on Engineering Dependable and Secure Machine Learning Systems
Abbreviated title: EDSMLS
Country/Territory: United States
City: New York
Period: 07/02/2020 - 07/02/2020
Internet address: https://sites.google.com/view/edsmls2020/home

Keywords

  • Deep Learning
  • model stealing
  • machine learning
