Crowdsourcing for Information Visualization: Promises and Pitfalls

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review


Original language: English
Title of host publication: Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments
Subtitle of host publication: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22–27, 2015, Revised Contributions
Editors: Daniel Archambault, Helen Purchase, Tobias Hossfeld
Publication status: Published - 2017
MoE publication type: A3 Part of a book or another research book

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Authors

  • Rita Borgo
  • Bongshin Lee
  • Benjamin Bach
  • Sara Fabrikant
  • Radu Jianu
  • Andreas Kerren
  • Stephen Kobourov
  • Fintan McGee
  • Luana Micallef
  • Tatiana von Landesberger
  • Katrin Ballweg
  • Stephan Diehl
  • Paolo Simonetto
  • Michelle Zhou

Research units

  • King’s College London
  • Microsoft Research
  • University of Zurich
  • City University London
  • Linnaeus University
  • University of Arizona
  • Luxembourg Institute of Science and Technology
  • University of Darmstadt
  • Trier University
  • Swansea University
  • Juji


Crowdsourcing offers great potential to overcome the limitations of controlled lab studies. To guide future designs of crowdsourcing-based studies for visualization, we review visualization research that has attempted to leverage crowdsourcing for empirical evaluations of visualizations. We discuss six core aspects of successfully employing crowdsourcing in empirical studies for visualization: participants, study design, study procedure, data, tasks, and metrics and measures. We then present four case studies, discussing potential mechanisms to overcome common pitfalls. This chapter will help the visualization community understand how to effectively and efficiently take advantage of the exciting potential crowdsourcing has to offer to support empirical visualization research.
