Recently, millimeter-wave radar-on-chip sensors such as Google Soli have become readily available in the mobile ecosystem. We envision such radar technology being integrated into wearables to enable gesture-based interaction for users 'on the go', e.g. to control devices such as a phone or a car infotainment system, even when the sensor is occluded by some material such as fabric. Towards achieving this vision, we developed a hybrid CNN+LSTM deep learning model and conducted a systematic study investigating mid-air gesture recognition performance when the radar sensor was covered by three different fabrics (leather, wool, and cotton). We show that, when trained on no occluding material, the model performed worse than when trained with each of the three fabrics; however, this holds only in the small-data regime (N=20). When trained with large samples (N=200) on no occluding material, the model achieved remarkable performance even when the sensor was covered by each of the fabrics (95% avg. accuracy, 99% AUC). Our results show that sensing mid-air gestures through fabrics is both feasible and ready for practical applications, since it is not necessary to train a dedicated model for each type of fabric available in the market. We also contribute a repeatable procedure to systematically test mid-air gestures with radar technology, enabled by an experimental platform that we release with this paper.
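The abstract does not detail the hybrid CNN+LSTM architecture, but the general pattern it names is well established: a small CNN encodes each radar frame independently, and an LSTM aggregates the per-frame features over time before classification. The sketch below illustrates that pattern in PyTorch; all layer sizes, the 32x32 frame resolution, and the 4-class output are hypothetical placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class RadarCNNLSTM(nn.Module):
    """Illustrative CNN+LSTM for gesture clips of shape (batch, time, H, W).

    A per-frame CNN produces a feature vector per radar frame; an LSTM
    models the temporal dynamics across frames; a linear head classifies
    the gesture from the final timestep's hidden state.
    """
    def __init__(self, n_classes=4, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # (N, 32)
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, H, W)
        b, t, h, w = x.shape
        # Encode every frame with the shared CNN, then restore the time axis.
        feats = self.cnn(x.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)              # (batch, time, hidden)
        return self.head(out[:, -1])           # logits from the last timestep

model = RadarCNNLSTM()
logits = model(torch.randn(2, 30, 32, 32))     # 2 clips, 30 frames of 32x32
print(tuple(logits.shape))                     # (2, 4): one logit vector per clip
```

The per-frame encoder is shared across timesteps, so the parameter count is independent of clip length; this is the usual reason for the CNN+LSTM split over a single 3D convolutional network.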
|DOI - permanent links|
|Status||Published - 10 May 2020|
|Event||International Conference on Human-Computer Interaction with Mobile Devices and Services - Oldenburg, Germany|
Duration: 5 October 2020 → 9 October 2020