Abstract
Predicting how users learn new or changed interfaces is a long-standing objective in HCI research. This paper contributes to the understanding of visual search and learning in text entry. With the goal of explaining the variance in novices' typing performance attributable to visual search, a model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory and then transitioning to recall-based search. This allows predicting search times and visual search patterns for completely and partially new layouts. The model complements models of motor performance and learning in text entry by predicting how visual search patterns change over time. Practitioners can use it to estimate how long it takes to reach a desired level of performance with a given layout.
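The abstract describes the model only conceptually. As a rough illustration of the general idea of a strategy shift from visual search to recall, the Python sketch below mixes an assumed serial-search cost with an assumed recall cost, weighted by an exponential learning curve. None of these functional forms or parameter values come from the paper; they are placeholders chosen purely for illustration.

```python
# Minimal illustrative sketch (not the paper's actual model): expected time to
# locate a key as a mixture of recall-based and visual-search-based strategies.
# The exponential learning curve and all parameter values are assumptions
# made for illustration only.

import math

def p_recall(encounters: int, learning_rate: float = 0.3) -> float:
    """Assumed probability that a key's location is recalled from memory
    after a given number of prior encounters (exponential learning curve)."""
    return 1.0 - math.exp(-learning_rate * encounters)

def expected_locate_time(encounters: int,
                         n_keys: int = 30,
                         t_recall: float = 0.25,
                         t_per_key: float = 0.05) -> float:
    """Expected time (s) to locate a key: recall when available, otherwise
    a serial visual search over roughly half of the candidate keys."""
    p = p_recall(encounters)
    t_search = t_per_key * n_keys / 2.0  # average serial-search cost
    return p * t_recall + (1.0 - p) * t_search

# Example: the expected locate time shrinks as the layout becomes familiar.
for e in (0, 1, 5, 20):
    print(f"after {e:2d} encounters: {expected_locate_time(e):.2f} s")
```

In this toy sketch the expected locate time falls from the visual-search cost toward the recall cost as encounters with a key accumulate, mirroring the transition from visual short-term memory to recall-based search described above.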
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems |
| Publisher | ACM |
| Pages | 4203-4215 |
| Number of pages | 13 |
| ISBN (Electronic) | 978-1-4503-4655-9 |
| DOIs | |
| Publication status | Published - 2017 |
| MoE publication type | A4 Conference publication |
| Event | ACM SIGCHI Annual Conference on Human Factors in Computing Systems, Colorado Convention Center, Denver, United States. Duration: 6 May 2017 → 11 May 2017. Conference number: 35. https://chi2017.acm.org/ |
Conference
| Conference | ACM SIGCHI Annual Conference on Human Factors in Computing Systems |
| --- | --- |
| Abbreviated title | ACM CHI |
| Country/Territory | United States |
| City | Denver |
| Period | 06/05/2017 → 11/05/2017 |
| Internet address | https://chi2017.acm.org/ |
Keywords
- visual search
- keyboard layouts
- models of learning
Prizes
- CHI Best Paper Award ×2
  Oulasvirta, Antti (Recipient), 2017
  Prize: Award or honor granted for a specific work