DAWN: Dynamic Adversarial Watermarking of Neural Networks

Sebastian Szyller, Buse Gul Atli, Samuel Marchal, N. Asokan

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

133 Citations (Scopus)

Abstract

Training machine learning (ML) models is expensive in terms of computational power, amounts of labeled data and human expertise. Thus, ML models constitute business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using results returned from a prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against model extraction since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter model extraction theft. Unlike prior watermarking schemes, DAWN does not impose changes to the training process but operates at the prediction API of the protected model, by dynamically changing the responses for a small subset of queries (e.g., 0.5%) from API clients. This set is a watermark that will be embedded in case a client uses its queries to train a surrogate model. We show that DAWN is resilient against two state-of-the-art model extraction attacks, effectively watermarking all extracted surrogate models, allowing model owners to reliably demonstrate ownership (with confidence greater than 1 − 2⁻⁶⁴), incurring negligible loss of prediction accuracy (0.03-0.5%).
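The mechanism described above — deterministically selecting a small fraction of API queries and returning perturbed responses for them — can be sketched as follows. This is a minimal illustration under assumed names and parameters (the class `DawnStyleFilter`, the HMAC-based selection, and the label-shift perturbation are simplifications for exposition, not the paper's exact scheme):

```python
import hmac
import hashlib


class DawnStyleFilter:
    """Minimal sketch of a DAWN-style API-side watermarking filter.

    A keyed HMAC of the serialized query decides deterministically
    whether the query belongs to the watermark set; for those queries,
    the true label is replaced with a keyed, incorrect one. A surrogate
    model trained on these responses memorizes the perturbed set, which
    the owner can later use to demonstrate ownership.
    """

    def __init__(self, key: bytes, num_classes: int, rate: float = 0.005):
        self.key = key
        self.num_classes = num_classes
        # Threshold on a 32-bit hash value selects roughly `rate`
        # (e.g. 0.5%) of all distinct queries.
        self.threshold = int(rate * 2**32)

    def _digest(self, query: bytes, tag: bytes = b"") -> int:
        mac = hmac.new(self.key, query + tag, hashlib.sha256).digest()
        return int.from_bytes(mac[:4], "big")

    def is_watermark(self, query: bytes) -> bool:
        # Deterministic: the same query always gets the same decision,
        # so repeated queries cannot be used to detect the watermark.
        return self._digest(query) < self.threshold

    def respond(self, query: bytes, true_label: int) -> int:
        if not self.is_watermark(query):
            return true_label
        # Derive a wrong-but-deterministic label from the keyed hash.
        shift = 1 + self._digest(query, b"label") % (self.num_classes - 1)
        return (true_label + shift) % self.num_classes
```

Because selection is keyed and deterministic, the filter needs no per-client state, and the perturbed responses are consistent across repeated queries.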

Original language: English
Title of host publication: Proceedings of the 29th ACM International Conference on Multimedia, MM 2021
Publisher: ACM
Pages: 4417-4425
Number of pages: 9
ISBN (Electronic): 978-1-4503-8651-7
Publication status: Published - 17 Oct 2021
MoE publication type: A4 Conference publication
Event: ACM International Conference on Multimedia - Virtual, Online, China
Duration: 20 Oct 2021 - 24 Oct 2021
Conference number: 29
https://2021.acmmm.org/

Conference

Conference: ACM International Conference on Multimedia
Abbreviated title: MM
Country/Territory: China
City: Virtual, Online
Period: 20/10/2021 - 24/10/2021
Internet address: https://2021.acmmm.org/

Funding

Prior defenses to model extraction protect only simple models [16, 30] or prevent only specific extraction attacks [20, 42]. DAWN is a novel approach in which we assume a surrogate model can be extracted, and propose a way to identify any surrogate DNN model that has been extracted from any victim model using any extraction attack. Acknowledgements. This work was supported in part by Intel (in the context of the Private-AI Institute).

Keywords

  • deep neural network
  • ip protection
  • model extraction
  • model stealing
  • watermarking
