Adaptive feature guidance: Modelling visual search with graphical layouts

Jussi P.P. Jokinen*, Zhenxin Wang, Sayan Sarcar, Antti Oulasvirta, Xiangshi Ren

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

16 Citations (Scopus)
165 Downloads (Pure)

Abstract

We present a computational model of visual search on graphical layouts. It assumes that the visual system maximises expected utility when choosing where to fixate next. Three utility estimates are available for each visual search target: one from unguided perception alone, and two in which perception is guided by long-term memory (of location or of visual feature). The system is adaptive, relying more on long-term memory as its estimates improve with experience; however, it must fall back to perception-guided search if the layout changes. The model gives practitioners a tool to evaluate how easily a novice or an expert can find an item, and what happens if a layout is changed. It suggests, for example, that (1) visually homogeneous layouts are harder to learn and more vulnerable to changes, (2) visually salient elements are easier to find and more robust to changes, and (3) moving a non-salient element far from its original location is particularly damaging. The model matched human data well in a study with realistic graphical layouts.
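The fixation-selection principle described in the abstract can be sketched as a small utility-maximisation loop. The sketch below is illustrative only: the function names, the linear experience-based weighting, and the numeric utilities are assumptions made for exposition, not the formulation used in the paper.

# Illustrative sketch (not the paper's implementation): select the next
# fixation by maximising expected utility across candidate elements,
# combining a perception-only estimate with two long-term-memory-guided
# estimates (remembered location and remembered visual feature).

def expected_utility(perception, memory_location, memory_feature, experience):
    """Combine the three utility estimates for one candidate element.

    `experience` in [0, 1]: 0 = novice (rely on perception only),
    1 = expert (rely fully on long-term memory). The linear weighting
    is a placeholder assumption.
    """
    memory = max(memory_location, memory_feature)  # best memory-guided estimate
    return (1.0 - experience) * perception + experience * memory

def choose_fixation(candidates, experience):
    """Return the index of the candidate with the highest expected utility.

    `candidates` is a list of (perception, memory_location, memory_feature)
    triples, one per layout element.
    """
    utilities = [expected_utility(p, ml, mf, experience)
                 for p, ml, mf in candidates]
    return max(range(len(utilities)), key=utilities.__getitem__)

# Toy example: a novice fixates the perceptually salient element first,
# whereas an expert goes straight to the remembered target.
candidates = [
    (0.7, 0.2, 0.3),  # salient distractor
    (0.3, 0.3, 0.2),
    (0.2, 0.9, 0.8),  # remembered target (location and feature)
]
print(choose_fixation(candidates, experience=0.1))  # -> 0 (novice)
print(choose_fixation(candidates, experience=0.9))  # -> 2 (expert)

Under this assumed weighting, a layout change that moves the target invalidates the memory-guided estimates, which corresponds to the model falling back toward perception-guided search.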

Original language: English
Article number: 102376
Number of pages: 22
Journal: International Journal of Human-Computer Studies
Volume: 136
DOIs
Publication status: Published - 1 Apr 2020
MoE publication type: A1 Journal article-refereed

Keywords

  • Computational modelling
  • Learning
  • Visual search
