Model Stealing Attacks and Defenses: Where Are We Now?

Research output: Article in a book/conference proceedings › Poster › Scientific › peer-reviewed

Abstract

The success of deep learning in many application domains has been nothing short of dramatic. This has brought the spotlight onto security and privacy concerns with machine learning (ML). One such concern is the threat of model theft. I will discuss work exploring this threat, especially in the form of “model extraction attacks”: when a model is made available to customers via an inference interface, a malicious customer can repeatedly query that interface and use the responses to construct a surrogate model. I will also discuss possible countermeasures, focusing on deterrence mechanisms that allow for model ownership resolution (MOR) based on watermarking or fingerprinting, and in particular on the robustness of MOR schemes. Finally, I will touch on the conflicts that arise when protection mechanisms against several different threats must be applied simultaneously to the same ML model, using MOR techniques as a case study.
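To make the two ideas in the abstract concrete, below is a minimal sketch of a black-box model extraction loop, followed by a toy trigger-set check in the spirit of MOR. Everything in it is a hypothetical illustration rather than the attacks or schemes from the talk: the victim_predict API, the SurrogateNet architecture, the random query distribution, and the 0.8 verification threshold are all assumptions made for the example (PyTorch, assuming version 1.10 or later for soft-label cross-entropy).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Hidden victim parameters: the attacker never sees these, only predictions.
    _VICTIM_W = torch.randn(20, 2)

    def victim_predict(x):
        """Stand-in for the deployed model's inference API (softmax scores only)."""
        return torch.softmax(x @ _VICTIM_W, dim=1)

    class SurrogateNet(nn.Module):
        """Small classifier the attacker trains to mimic the victim."""
        def __init__(self, in_dim=20, n_classes=2):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_classes))
        def forward(self, x):
            return self.net(x)

    def extract(n_queries=5000, batch=100):
        """Repeatedly query the victim and fit a surrogate on its answers."""
        surrogate = SurrogateNet()
        opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
        for _ in range(n_queries // batch):
            x = torch.randn(batch, 20)   # attacker-chosen queries
            y = victim_predict(x)        # soft labels returned by the API
            # Distillation-style fit to the victim's score vectors
            # (soft targets require PyTorch >= 1.10).
            loss = F.cross_entropy(surrogate(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return surrogate

    def mor_verify(model, trigger_x, trigger_y, threshold=0.8):
        """Toy ownership check: a watermarked (or fingerprinted) model should
        reproduce the owner's secret trigger labels at a rate an independently
        trained model would not. The threshold here is an arbitrary example."""
        with torch.no_grad():
            preds = model(trigger_x).argmax(dim=1)
        return (preds == trigger_y).float().mean().item() >= threshold

Practical extraction attacks differ from this sketch mainly in how queries are chosen (natural data versus adaptively synthesized inputs) and in what the interface returns (top-1 labels versus full score vectors), both of which strongly affect query efficiency.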

This talk is based on work done with my students and collaborators, including Buse Atli Tekgul, Jian Liu, Mika Juuti, Rui Zhang, Samuel Marchal, and Sebastian Szyller. The work was funded in part by Intel Labs in the context of the Private AI consortium.
Original language: English
Pages: 327-327
Number of pages: 1
Status: Published - 2023
OKM publication type: Not eligible
Event: ACM Asia Conference on Computer and Communications Security - Melbourne, Australia
Duration: 10 Jul 2023 - 14 Jul 2023

Conference

Conference: ACM Asia Conference on Computer and Communications Security
Abbreviated title: ASIA CCS
Country/Territory: Australia
City: Melbourne
Period: 10/07/2023 - 14/07/2023
