Abstract
In a world increasingly reliant on artificial intelligence (AI), it is more important than ever to consider its ethical implications. One key, under-explored challenge is labeler bias - bias introduced by the individuals who label datasets - which can produce inherently biased training datasets and subsequently lead to inaccurate or unfair decisions in healthcare, employment, education, and law enforcement. Hence, we conducted a study (N = 98) to investigate and measure labeler bias using images of people of different ethnicities and sexes in a labeling task. Our results show that participants hold stereotypes that influence their decision-making and that labeler demographics affect the labels they assign. We also discuss how labeler bias influences datasets and, subsequently, the models trained on them. Overall, a high degree of transparency must be maintained throughout the entire AI training process to identify and correct biases in the data as early as possible.
Original language | English |
---|---|
Title of host publication | HHAI 2024 |
Subtitle of host publication | Hybrid Human AI Systems for the Social Good - Proceedings of the 3rd International Conference on Hybrid Human-Artificial Intelligence |
Editors | Fabian Lorig, Jason Tucker, Adam Dahlgren Lindstrom, Frank Dignum, Pradeep Murukannaiah, Andreas Theodorou, Pinar Yolum |
Publisher | IOS Press |
Pages | 145-161 |
Number of pages | 17 |
ISBN (Electronic) | 9781643685229 |
DOIs | |
Publication status | Published - 5 Jun 2024 |
MoE publication type | A4 Conference publication |
Event | 3rd International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden. Duration: 10 Jun 2024 → 14 Jun 2024 |
Publication series
Name | Frontiers in Artificial Intelligence and Applications |
---|---|
Volume | 386 |
ISSN (Print) | 0922-6389 |
ISSN (Electronic) | 1879-8314 |
Conference
Conference | International Conference on Hybrid Human-Artificial Intelligence |
---|---|
Abbreviated title | HHAI |
Country/Territory | Sweden |
City | Malmö |
Period | 10/06/2024 → 14/06/2024 |
Keywords
- annotation
- bias
- crowdworkers
- labeler bias
- machine learning