Crowdsourcing Subjective Annotations Using Pairwise Comparisons Reduces Bias and Error Compared to the Majority-vote Method

Research output: Contribution to journal › Article › Scientific › peer-review

5 Citations (Scopus)
77 Downloads (Pure)

Abstract

How to reduce the measurement variability and bias introduced by subjectivity in crowdsourced labelling remains an open question. We introduce a theoretical framework for understanding how random error and measurement bias enter crowdsourced annotations of subjective constructs. We then propose a pipeline that combines pairwise-comparison labelling with Elo scoring and demonstrate that it outperforms the ubiquitous majority-vote method in reducing both types of measurement error. To assess the performance of the labelling approaches, we constructed an agent-based model of crowdsourced labelling that allows us to introduce different types of subjectivity into the tasks. We find that under most conditions involving task subjectivity, the comparison approach produces higher F1 scores. Furthermore, the comparison approach is less susceptible to inflating bias than majority voting. To facilitate applications, we show with simulated and real-world data that the number of random comparisons required for the same classification accuracy scales log-linearly, O(N log N), with the number of labelled items N. We also implemented the Elo system as an open-source Python package.
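
The Elo step of the pipeline can be illustrated with a minimal sketch that turns (winner, loser) pairwise labels into per-item scores. This is an assumed illustration, not the authors' released package: the elo_scores function name, the K factor of 32, and the 1500/400 rating scale are conventional Elo defaults chosen here for the example.

    from collections import defaultdict

    def elo_scores(comparisons, k=32.0, base_rating=1500.0):
        """Return an Elo rating per item from (winner, loser) comparison labels."""
        ratings = defaultdict(lambda: base_rating)
        for winner, loser in comparisons:
            # Expected probability that `winner` beats `loser` under the logistic Elo model.
            expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
            # Shift both ratings by the same surprise-weighted step.
            ratings[winner] += k * (1.0 - expected)
            ratings[loser] -= k * (1.0 - expected)
        return dict(ratings)

    # Example: four random pairwise judgements over three items.
    labels = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
    print(elo_scores(labels))  # higher rating -> judged as having more of the construct

In a pipeline like the one described in the abstract, such scores would then be ranked or thresholded to produce the final labels; with random pairings, the abstract reports that roughly O(N log N) comparisons suffice for the same classification accuracy.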

Original language: English
Article number: 3610183
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 7
Issue number: CSCW2
DOIs
Publication status: Published - 4 Oct 2023
MoE publication type: A1 Journal article-refereed

Keywords

  • comparison method
  • crowdsourcing
  • majority-vote method
  • subjectivity
