SoK: Unintended Interactions among Machine Learning Defenses and Risks

Vasisht Duddu*, Sebastian Szyller, N. Asokan

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Machine learning (ML) models cannot neglect risks to security, privacy, and fairness. Several defenses have been proposed to mitigate such risks. When a defense is effective in mitigating one risk, it may correspond to increased or decreased susceptibility to other risks. Existing research lacks an effective framework to recognize and explain these unintended interactions. We present such a framework, based on the conjecture that overfitting and memorization underlie unintended interactions. We survey existing literature on unintended interactions, accommodating them within our framework. We use our framework to conjecture on two previously unexplored interactions, and empirically validate them.

Original language: English
Title of host publication: Proceedings - 45th IEEE Symposium on Security and Privacy, SP 2024
Publisher: IEEE
Pages: 2996-3014
Number of pages: 19
ISBN (Electronic): 979-8-3503-3130-1
DOIs
Publication status: Published - 2024
MoE publication type: A4 Conference publication
Event: IEEE Symposium on Security and Privacy - San Francisco, United States
Duration: 20 May 2024 - 23 May 2024
Conference number: 45

Publication series

Name: Proceedings - IEEE Symposium on Security and Privacy
ISSN (Print): 1081-6011

Conference

Conference: IEEE Symposium on Security and Privacy
Abbreviated title: SP
Country/Territory: United States
City: San Francisco
Period: 20/05/2024 - 23/05/2024

Keywords

  • Memorization
  • Overfitting
  • Systematization
  • Trustworthy Machine Learning
