Abstract
Machine learning (ML) models face risks to security, privacy, and fairness, and several defenses have been proposed to mitigate such risks. However, a defense that is effective in mitigating one risk may increase or decrease susceptibility to other risks. Existing research lacks an effective framework for recognizing and explaining these unintended interactions. We present such a framework, based on the conjecture that overfitting and memorization underlie unintended interactions. We survey the existing literature on unintended interactions and situate it within our framework. Using the framework, we conjecture two previously unexplored interactions and validate them empirically.
Original language | English |
---|---|
Title of host publication | Proceedings - 45th IEEE Symposium on Security and Privacy, SP 2024 |
Publisher | IEEE |
Pages | 2996-3014 |
Number of pages | 19 |
ISBN (Electronic) | 979-8-3503-3130-1 |
DOIs | |
Publication status | Published - 2024 |
MoE publication type | A4 Conference publication |
Event | IEEE Symposium on Security and Privacy - San Francisco, United States; Duration: 20 May 2024 → 23 May 2024; Conference number: 45 |
Publication series
Name | Proceedings - IEEE Symposium on Security and Privacy |
---|---|
ISSN (Print) | 1081-6011 |
Conference
Conference | IEEE Symposium on Security and Privacy |
---|---|
Abbreviated title | SP |
Country/Territory | United States |
City | San Francisco |
Period | 20/05/2024 → 23/05/2024 |
Keywords
- Memorization
- Overfitting
- Systematization
- Trustworthy Machine Learning