Adaptive feature guidance: Modelling visual search with graphical layouts

Jussi P.P. Jokinen*, Zhenxin Wang, Sayan Sarcar, Antti Oulasvirta, Xiangshi Ren

*Corresponding author of this work

Research output: Journal article › Article › Scientific › Peer-reviewed

17 Citations (Scopus)
196 Downloads (Pure)


We present a computational model of visual search on graphical layouts. It assumes that the visual system maximises expected utility when choosing where to fixate next. Three utility estimates are available for each visual search target: one from unguided perception alone, and two in which perception is guided by long-term memory (by location or by visual feature). The system is adaptive, relying more on long-term memory as its estimates improve with experience. However, it must fall back on perception-guided search if the layout changes. The model provides a tool for practitioners to evaluate how easy it is for a novice or an expert to find an item, and what happens if a layout is changed. The model suggests, for example, that (1) visually homogeneous layouts are harder to learn and more vulnerable to changes, (2) visually salient elements are easier to find and more robust to changes, and (3) moving a non-salient element far away from its original location is particularly damaging. The model provided a good match with human data in a study with realistic graphical layouts.
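The adaptive mechanism sketched in the abstract, choosing among a perception-only estimate and two memory-guided estimates whose weight grows with experience, can be illustrated as follows. This is a minimal hypothetical sketch, not the authors' implementation; the class, method names, and the simple learning-rate update are assumptions made for illustration.

```python
# Hypothetical sketch of adaptive guidance in visual search: three
# utility estimates per target (unguided perception, memory-guided by
# location, memory-guided by visual feature). Reliance on a memory
# guide grows as it proves reliable and collapses if the layout changes.
# The confidence update rule is an assumption, not the paper's model.

class AdaptiveGuidance:
    def __init__(self):
        # Confidence in each long-term-memory guide, learned with experience.
        self.memory_confidence = {"location": 0.0, "feature": 0.0}

    def expected_utilities(self, perception_utility, memory_utilities):
        """Combine the perception-only estimate with memory-guided ones,
        each weighted by the current confidence in that memory guide."""
        utils = {"perception": perception_utility}
        for guide, u in memory_utilities.items():
            utils[guide] = self.memory_confidence[guide] * u
        return utils

    def choose_strategy(self, perception_utility, memory_utilities):
        """Pick the guidance source with the highest expected utility."""
        utils = self.expected_utilities(perception_utility, memory_utilities)
        return max(utils, key=utils.get)

    def update(self, guide, success, lr=0.2):
        """Strengthen a memory guide after a successful search with it;
        weaken it after a failure (e.g. when the layout has changed)."""
        target = 1.0 if success else 0.0
        self.memory_confidence[guide] += lr * (target - self.memory_confidence[guide])
```

With zero experience the model relies on perception; repeated successful location-guided searches shift it to location memory, and repeated failures after a layout change drive it back to perception-guided search.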

Journal: International Journal of Human Computer Studies
Status: Published - 1 Apr 2020
Publication type (Ministry of Education classification): A1 Original article in a scientific journal

