Abstract
Crafting adversarial inputs for attacks on neural networks and robustification against such attacks have continued to be a topic of keen interest in the machine learning community. Yet, the vast majority of work in current literature is only empirical in nature. We present a novel viewpoint on adversarial attacks on recurrent neural networks (RNNs) through the lens of dynamical systems theory. In particular, we show how control theory-based analysis tools can be leveraged to compute these adversarial input disturbances, and obtain bounds on how they impact the neural network performance. The disturbances are computed dynamically at each time-step by taking advantage of the recurrent architecture of RNNs, thus making them more efficient compared to prior work, as well as amenable to ‘real-time’ attacks. Finally, the theoretical results are supported by some illustrative examples.
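The per-time-step disturbance computation described in the abstract can be illustrated with a toy gradient-based heuristic. This is a minimal sketch under stated assumptions, not the paper's control-theoretic synthesis: the vanilla RNN, its dimensions, and the greedy sign-based attack below are all hypothetical stand-ins chosen only to show how a norm-bounded disturbance can be picked dynamically at each step of a recurrent model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vanilla RNN: h_{t+1} = tanh(W h_t + U (x_t + d_t)),
# where d_t is the attacker's norm-bounded input disturbance.
n_h, n_x = 4, 3  # hypothetical state/input dimensions
W = rng.standard_normal((n_h, n_h)) * 0.5
U = rng.standard_normal((n_h, n_x)) * 0.5

def step(h, x):
    return np.tanh(W @ h + U @ x)

def per_step_disturbance(h, x, target, eps=0.1):
    """Greedy one-step attack: choose an infinity-norm-bounded d_t
    that pushes h_{t+1} away from `target`, using the local gradient
    sign. Illustrative heuristic only, not the paper's method."""
    pre = W @ h + U @ x
    # gradient of ||h_{t+1} - target||^2 with respect to the input x
    g = U.T @ ((1.0 - np.tanh(pre) ** 2) * 2.0 * (np.tanh(pre) - target))
    return eps * np.sign(g)  # ascend the loss within the eps-ball

# Attack a short trajectory, recomputing d_t at every time step,
# which is what makes the scheme amenable to 'real-time' use.
target = np.zeros(n_h)  # nominal state the attacker drives h away from
xs = rng.standard_normal((5, n_x))
h_adv = np.zeros(n_h)
for x in xs:
    d = per_step_disturbance(h_adv, x, target)
    h_adv = step(h_adv, x + d)
```

Because each `d` depends only on the current state and input, the disturbance is produced online as the sequence arrives, rather than by optimizing over the whole input sequence offline as in many prior attack formulations.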
Original language | English |
---|---|
Title of host publication | 2020 59th IEEE Conference on Decision and Control (CDC) |
Publisher | IEEE |
Pages | 4677-4682 |
Number of pages | 6 |
ISBN (Print) | 978-1-7281-7448-8 |
DOIs | |
Status | Published - 18 Dec 2020 |
OKM publication type | A4 Article in conference proceedings |
Event | IEEE Conference on Decision and Control - Virtual, Online, Jeju Island, South Korea. Duration: 14 Dec 2020 → 18 Dec 2020. Conference number: 59 |
Conference
Conference | IEEE Conference on Decision and Control |
---|---|
Abbreviated title | CDC |
Country/Territory | South Korea |
City | Jeju Island |
Period | 14/12/2020 → 18/12/2020 |