Investigating Labeler Bias in Face Annotation for Machine Learning

Luke Haliburton*, Sinksar Ghebremedhin, Robin Welsch, Albrecht Schmidt, Sven Mayer

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Abstract

In a world increasingly reliant on artificial intelligence, it is more important than ever to consider its ethical implications. One key under-explored challenge is labeler bias - bias introduced by individuals who label datasets - which can create inherently biased datasets for training and subsequently lead to inaccurate or unfair decisions in healthcare, employment, education, and law enforcement. Hence, we conducted a study (N=98) to investigate and measure the existence of labeler bias using images of people from different ethnicities and sexes in a labeling task. Our results show that participants hold stereotypes that influence their decision-making process and that labeler demographics impact assigned labels. We also discuss how labeler bias influences datasets and, subsequently, the models trained on them. Overall, a high degree of transparency must be maintained throughout the entire artificial intelligence training process to identify and correct biases in the data as early as possible.

Original language: English
Title of host publication: HHAI 2024
Subtitle of host publication: Hybrid Human AI Systems for the Social Good - Proceedings of the 3rd International Conference on Hybrid Human-Artificial Intelligence
Editors: Fabian Lorig, Jason Tucker, Adam Dahlgren Lindstrom, Frank Dignum, Pradeep Murukannaiah, Andreas Theodorou, Pinar Yolum
Publisher: IOS Press
Pages: 145-161
Number of pages: 17
ISBN (Electronic): 9781643685229
DOIs
Publication status: Published - 5 Jun 2024
MoE publication type: A4 Conference publication
Event: International Conference on Hybrid Human-Artificial Intelligence - Malmö, Sweden
Duration: 10 Jun 2024 - 14 Jun 2024
Conference number: 3

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 386
ISSN (Print): 0922-6389
ISSN (Electronic): 1879-8314

Conference

Conference: International Conference on Hybrid Human-Artificial Intelligence
Abbreviated title: HHAI
Country/Territory: Sweden
City: Malmö
Period: 10/06/2024 - 14/06/2024

Keywords

  • annotation
  • bias
  • crowdworkers
  • labeler bias
  • machine learning
