Making targeted black-box evasion attacks effective and efficient

Research output: Contribution to conference › Paper › Scientific › peer-review

Standard

Making targeted black-box evasion attacks effective and efficient. / Juuti, Mika; Atli, Buse; Asokan, N.

2019. Paper presented at ACM Workshop on Artificial Intelligence and Security, London, United Kingdom.


Harvard

Juuti, M, Atli, B & Asokan, N 2019, 'Making targeted black-box evasion attacks effective and efficient', Paper presented at ACM Workshop on Artificial Intelligence and Security, London, United Kingdom, 15/11/2019 - 15/11/2019.

APA

Juuti, M., Atli, B., & Asokan, N. (Accepted/In press). Making targeted black-box evasion attacks effective and efficient. Paper presented at ACM Workshop on Artificial Intelligence and Security, London, United Kingdom.

Vancouver

Juuti M, Atli B, Asokan N. Making targeted black-box evasion attacks effective and efficient. 2019. Paper presented at ACM Workshop on Artificial Intelligence and Security, London, United Kingdom.

Author

Juuti, Mika ; Atli, Buse ; Asokan, N. / Making targeted black-box evasion attacks effective and efficient. Paper presented at ACM Workshop on Artificial Intelligence and Security, London, United Kingdom. 12 p.

BibTeX

@conference{787ec13468014c69b8e81a8bb13c94c4,
title = "Making targeted black-box evasion attacks effective and efficient",
abstract = "We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting. We formalize the problem setting and systematically evaluate what benefits the adversary can gain by using substitute models. We show that there is an exploration-exploitation tradeoff in that query efficiency comes at the cost of effectiveness. We present two new attack strategies for using substitute models and show that they are as effective as previous query-only techniques but require significantly fewer queries, by up to three orders of magnitude. We also show that an agile adversary capable of switching between different attack techniques can achieve Pareto-optimal efficiency. We demonstrate our attack against Google Cloud Vision, showing that black-box attacks against real-world prediction APIs are significantly easier than previously thought (requiring approximately 500 queries instead of approximately 20,000 as in previous works).",
keywords = "adversarial examples, Neural Networks",
author = "Mika Juuti and Buse Atli and N. Asokan",
year = "2019",
month = "8",
day = "12",
language = "English",
note = "ACM Workshop on Artificial Intelligence and Security, AISec ; Conference date: 15-11-2019 Through 15-11-2019",
url = "https://aisec.cc/",

}

RIS

TY - CONF

T1 - Making targeted black-box evasion attacks effective and efficient

AU - Juuti, Mika

AU - Atli, Buse

AU - Asokan, N.

PY - 2019/8/12

Y1 - 2019/8/12

N2 - We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting. We formalize the problem setting and systematically evaluate what benefits the adversary can gain by using substitute models. We show that there is an exploration-exploitation tradeoff in that query efficiency comes at the cost of effectiveness. We present two new attack strategies for using substitute models and show that they are as effective as previous query-only techniques but require significantly fewer queries, by up to three orders of magnitude. We also show that an agile adversary capable of switching between different attack techniques can achieve Pareto-optimal efficiency. We demonstrate our attack against Google Cloud Vision, showing that black-box attacks against real-world prediction APIs are significantly easier than previously thought (requiring approximately 500 queries instead of approximately 20,000 as in previous works).

AB - We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting. We formalize the problem setting and systematically evaluate what benefits the adversary can gain by using substitute models. We show that there is an exploration-exploitation tradeoff in that query efficiency comes at the cost of effectiveness. We present two new attack strategies for using substitute models and show that they are as effective as previous query-only techniques but require significantly fewer queries, by up to three orders of magnitude. We also show that an agile adversary capable of switching between different attack techniques can achieve Pareto-optimal efficiency. We demonstrate our attack against Google Cloud Vision, showing that black-box attacks against real-world prediction APIs are significantly easier than previously thought (requiring approximately 500 queries instead of approximately 20,000 as in previous works).

KW - adversarial examples

KW - Neural Networks

M3 - Paper

ER -
