VisRecall: Quantifying Information Visualisation Recallability Via Question Answering

Yao Wang, Chuhan Jiao, Mihai Bâce, Andreas Bulling

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Despite its importance for assessing how effectively information is communicated visually, the fine-grained recallability of information visualisations has not been studied quantitatively so far. In this work, we propose a question-answering paradigm to study visualisation recallability and present VisRecall, a novel dataset of 200 visualisations annotated with crowd-sourced human (N = 305) recallability scores obtained from 1,000 questions of five question types. Furthermore, we present the first computational method to predict the recallability of different visualisation elements, such as the title or specific data values. We report detailed analyses of our method on VisRecall and demonstrate that it outperforms several baselines in overall recallability as well as in FE-, F-, RV-, and U-question recallability. Our work makes fundamental contributions towards a new generation of methods that assist designers in optimising visualisations.
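The exact scoring protocol is defined in the paper itself; purely as an illustration of the question-answering paradigm, the Python sketch below aggregates hypothetical crowd-sourced answer records into a per-visualisation recallability score by taking mean answer correctness. All record fields, the "T" label for the fifth question type, and the mean-correctness aggregation are illustrative assumptions, not the authors' formulation.

    from collections import defaultdict

    # Hypothetical answer records: (visualisation_id, question_type, is_correct).
    # FE, F, RV, and U follow the abstract; "T" for the fifth type is assumed.
    answers = [
        ("vis_001", "T", True),
        ("vis_001", "FE", False),
        ("vis_001", "RV", True),
        ("vis_002", "F", True),
        ("vis_002", "U", False),
    ]

    def recallability_scores(records):
        """Mean answer correctness per visualisation, in [0, 1]."""
        correct, total = defaultdict(int), defaultdict(int)
        for vis_id, _qtype, is_correct in records:
            correct[vis_id] += int(is_correct)
            total[vis_id] += 1
        return {v: correct[v] / total[v] for v in total}

    print(recallability_scores(answers))
    # e.g. vis_001 -> 2/3, vis_002 -> 1/2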

Original language: English
Pages (from-to): 4995-5005
Number of pages: 11
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 28
Issue number: 12
Early online date: 2022
DOIs
Publication status: Published - 1 Dec 2022
MoE publication type: A1 Journal article-refereed

Keywords

  • Bars
  • Computational modeling
  • Data visualization
  • Image recognition
  • Information visualisation
  • machine learning
  • memorability
  • Question answering (information retrieval)
  • recallability
  • Task analysis
  • Visualization
