SeeNav: Seamless and Energy-Efficient Indoor Navigation using Augmented Reality

Marius Noreikis, Yu Xiao, Antti Ylä-Jääski

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

9 Citations (Scopus)


Augmented Reality (AR) based navigation has emerged as an impressive, yet seamless way of guiding users in unknown environments. Its quality of experience depends on many factors, including the accuracy of camera pose estimation, response delay, and energy consumption. In this paper, we present SeeNav - a seamless and energy-efficient AR navigation system for indoor environments. SeeNav combines image-based localization and inertial tracking to provide accurate and robust camera pose estimation. As vision processing is much more compute-intensive than the processing of inertial sensor data, SeeNav offloads the former from resource-constrained mobile devices to a cloud to improve tracking performance and reduce power consumption. Moreover, SeeNav implements a context-aware task scheduling algorithm that further minimizes energy consumption while maintaining the accuracy of camera pose estimation. Our experimental results, including a user study, show that SeeNav provides a seamless navigation experience and reduces the overall energy consumption by 21.56% with context-aware task scheduling.
Original language: English
Title of host publication: Proceedings of the Thematic Workshops of ACM Multimedia 2017
Subtitle of host publication: Thematic Workshops '17
Number of pages: 8
ISBN (Electronic): 978-1-4503-5416-5
Publication status: Published - 23 Oct 2017
MoE publication type: A4 Article in a conference publication
Event: ACM Multimedia - Mountain View, United States
Duration: 23 Oct 2017 - 27 Oct 2017
Conference number: 25


Conference: ACM Multimedia
Abbreviated title: ACMMM
Country: United States
City: Mountain View


  • Augmented reality
  • Indoor navigation
  • Energy efficiency
