Locating places in cities is typically facilitated by handheld mobile devices, which draw the user’s visual attention to the screen of the device instead of the surroundings. In this research, we aim to strengthen the connection between people and their surroundings by enabling mid-air gestural interaction with real-world landmarks and delivering information through audio, so that users’ visual attention remains on the scene. Recent research on gesture-based and haptic techniques for such purposes has mainly considered handheld devices, which eventually direct users’ attention back to the devices. We contribute a hand-worn, mid-air gestural interaction design with directional vibrotactile guidance for finding points of interest (POIs). Through three design iterations, we address (1) sensing technologies and the placement of actuators with respect to users’ instinctive postures, (2) the feasibility of finding and fetching information about landmarks without visual feedback, and (3) the benefits of such interaction in a tourist application. In a final evaluation, participants located POIs and fetched information by pointing and following directional guidance, thus realising a vision in which they found and experienced real-world landmarks while keeping their visual attention on the scene. The results show that the interaction technique has performance comparable to a visual baseline, enables high mobility, and facilitates keeping visual attention on the surroundings.
|Journal||Personal and Ubiquitous Computing|
|Early online publication date||1 January 2018|
|DOI - permanent links|
|Status||Published - February 2019|
|OKM publication type||A1 Refereed journal article|