All You Need Is "Love": Evading Hate Speech Detection

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Research units

  • University of Padua

Abstract

With the spread of social networks and their unfortunate use for hate speech, automatic hate speech detection has become a pressing problem. In this paper, we reproduce seven state-of-the-art hate speech detection models from prior work and show that they perform well only when tested on the same type of data they were trained on. Based on these results, we argue that for successful hate speech detection, model architecture is less important than the type of data and the labeling criteria. We further show that all of the proposed detection techniques are brittle against adversaries who can (automatically) insert typos, change word boundaries, or add innocuous words to the original hate speech. A combination of these methods is also effective against Google Perspective, a cutting-edge solution from industry. Our experiments demonstrate that adversarial training does not completely mitigate the attacks, and that using character-level features makes the models systematically more attack-resistant than using word-level features.
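The three attack families the abstract names (typo insertion, word-boundary changes, and innocuous-word padding) can be sketched as simple text transformations. The snippet below is a minimal illustration of the idea, not the authors' implementation; all function names and the choice of the padding word "love" (echoing the paper's title) are our own.

```python
import random

random.seed(0)  # fixed seed so the illustrative typo is reproducible

def insert_typo(word: str) -> str:
    """Swap two adjacent characters to simulate a typo."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def remove_word_boundaries(text: str) -> str:
    """Delete spaces so word-level tokenizers no longer see known tokens."""
    return text.replace(" ", "")

def add_innocuous_words(text: str, word: str = "love", times: int = 3) -> str:
    """Append benign words to dilute a classifier's overall toxicity score."""
    return text + " " + " ".join([word] * times)

sample = "you are stupid"
print(insert_typo("stupid"))            # adjacent-character swap
print(remove_word_boundaries(sample))   # "youarestupid"
print(add_innocuous_words(sample))      # "... love love love"
```

A word-level model that has never seen the joined string or the swapped characters treats them as out-of-vocabulary tokens, which is why the paper finds character-level features more attack-resistant.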

Details

Original language: English
Title of host publication: Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security
Publication status: Published - 2018
MoE publication type: A4 Article in a conference publication
Event: ACM Workshop on Artificial Intelligence and Security - Toronto, Canada
Duration: 19 Oct 2018 - 19 Oct 2018
Conference number: 11

Workshop

Workshop: ACM Workshop on Artificial Intelligence and Security
Abbreviated title: AISec
Country: Canada
City: Toronto
Period: 19/10/2018 - 19/10/2018
