VisRecall++: Analysing and Predicting Visualisation Recallability from Gaze Behaviour

Yao Wang, Yue Jiang, Zhiming Hu, Constantin Ruhdorfer, Mihai Bâce, Andreas Bulling

Research output: Journal article › Scientific › peer-reviewed

Abstract

Question answering has recently been proposed as a promising means to assess the recallability of information visualisations. However, prior work has yet to study the link between visually encoding a visualisation in memory and recall performance. To fill this gap, we propose VisRecall++ – a novel 40-participant recallability dataset that contains gaze data on 200 visualisations and 1,000 questions, including identifying the title and retrieving values. We measured recallability by asking participants questions after they had observed a visualisation for 10 seconds. Our analyses reveal several insights, for example that saccade amplitude, number of fixations, and fixation duration differ significantly between high- and low-recallability groups. Finally, we propose GazeRecallNet – a novel computational method to predict recallability from gaze behaviour that outperforms the state-of-the-art model RecallNet and three other baselines on this task. Taken together, our results shed light on assessing recallability from gaze behaviour and inform future work on recallability-based visualisation optimisation.
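To make the gaze features named in the abstract concrete, the following is a minimal Python sketch of how saccade amplitude, number of fixations, and fixation duration could be computed from a recorded fixation sequence. The function name `gaze_features`, the (x, y, duration) input format, and the pixel units are illustrative assumptions for this sketch, not the dataset's actual schema or the paper's released code.

```python
import numpy as np

def gaze_features(fixations):
    """Compute three common gaze features from a fixation sequence.

    `fixations` is assumed here to be an (N, 3) array of
    [x_px, y_px, duration_ms] rows in temporal order; this input
    format is an assumption for illustration only.
    """
    fixations = np.asarray(fixations, dtype=float)
    xy, durations = fixations[:, :2], fixations[:, 2]

    # Saccade amplitude: Euclidean distance between consecutive fixations.
    amplitudes = np.linalg.norm(np.diff(xy, axis=0), axis=1)

    return {
        "num_fixations": len(fixations),
        "mean_fixation_duration_ms": durations.mean(),
        "mean_saccade_amplitude_px": amplitudes.mean() if len(amplitudes) else 0.0,
    }

# Example: three fixations from a hypothetical 10-second viewing.
print(gaze_features([[120, 80, 310], [340, 95, 240], [330, 400, 420]]))
```

Features like these could then serve as inputs to a classifier separating high- and low-recallability trials, which is the kind of comparison the paper's statistical analysis performs.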

Original language: English
Article number: 239
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 8
Issue: ETRA
DOI (permanent link)
Status: Published - 28 May 2024
OKM publication type: A1 Original article in a scientific journal
