Gaze is an important indicator of visual attention, and knowledge of gaze location can be used to improve and augment Virtual Reality (VR) experiences. This has led to the development of VR Head-Mounted Displays (HMDs) with built-in gaze trackers. Given the latency constraints of VR, foreknowledge of gaze, i.e., knowing it before it is reported by the gaze tracker, can likewise be leveraged to preemptively apply gaze-based improvements and augmentations to a VR experience, especially in distributed VR architectures. In this paper, we propose a lightweight neural-network-based method that uses only past HMD pose and gaze data to predict future gaze locations, forgoing computationally heavy saliency computation. Most work in this domain has focused either on 360° or egocentric video, or on synthetic VR content with rather naive interaction dynamics such as free viewing or supervised visual search tasks. Our solution draws on the exhaustive OpenNEEDS dataset, which contains 6 Degrees of Freedom (6DoF) data captured in VR experiences in which subjects were free to explore the VR scene and/or to engage in tasks. Our solution outperforms a very strict baseline, using the current gaze as the prediction, in real time for sub-150 ms prediction horizons in VR use cases.
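The abstract's setup can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window length, feature dimensions, and the linear stand-in for the lightweight network are all assumptions. It shows the input/output shapes (a short window of past 6DoF HMD pose and gaze samples mapped to a future gaze location) and the strict "current gaze" baseline the paper compares against.

```python
import numpy as np

# Assumed dimensions (not taken from the paper):
WINDOW = 10    # number of past samples in the input window
POSE_DIM = 6   # 6DoF head pose: 3D position + 3D orientation
GAZE_DIM = 2   # normalized 2D gaze location

rng = np.random.default_rng(0)

def baseline_predict(gaze_window):
    """Strict baseline from the abstract: future gaze = current gaze."""
    return gaze_window[-1]

def linear_predict(pose_window, gaze_window, weights):
    """Illustrative stand-in for the lightweight network: a single
    linear layer over the flattened window of past pose + gaze."""
    features = np.concatenate([pose_window.ravel(), gaze_window.ravel()])
    return features @ weights  # -> (GAZE_DIM,) predicted gaze location

# Synthetic example inputs.
pose_window = rng.normal(size=(WINDOW, POSE_DIM))
gaze_window = rng.normal(size=(WINDOW, GAZE_DIM))
weights = rng.normal(size=(WINDOW * (POSE_DIM + GAZE_DIM), GAZE_DIM)) * 0.01

print(baseline_predict(gaze_window).shape)
print(linear_predict(pose_window, gaze_window, weights).shape)
```

In practice the linear layer would be replaced by the paper's trained network, and the baseline error (current gaze vs. gaze at the prediction horizon) gives the bar the model must beat.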
Original language: English
Number of pages: 6
Publication status: Published - 11 Jul 2022
MoE publication type: Not Eligible
Event: International Workshop on Immersive Mixed and Virtual Environment Systems - Athlone, Ireland
Duration: 14 Jun 2022 → …
Conference number: 14


Workshop: International Workshop on Immersive Mixed and Virtual Environment Systems
Abbreviated title: MMVE
Period: 14/06/2022 → …


  • Virtual Reality (VR)
  • Gaze Prediction
  • Neural Networks


