ViNav: A Vision-based Indoor Navigation System for Smartphones

Research output: Contribution to journal › Article › Scientific › peer-review


Smartphone-based indoor navigation services are in high demand in indoor environments. However, their adoption has been relatively slow, due to the lack of fine-grained, up-to-date indoor maps and the potentially high deployment and maintenance costs of infrastructure-based indoor localization solutions. This work proposes ViNav, a scalable and cost-efficient system that implements indoor mapping, localization, and navigation based on visual and inertial sensor data collected from smartphones. ViNav applies structure-from-motion (SfM) techniques to reconstruct 3D models of indoor environments from crowdsourced images, locates points of interest (POIs) in the 3D models, and compiles navigation meshes for pathfinding. ViNav implements image-based localization that identifies users' positions and facing directions, and leverages this feature to calibrate dead-reckoning-based user trajectories and the sensor fingerprints collected along them. The calibrated information is used to build more informative and accurate indoor maps and to lower the response delay of localization requests. According to our experimental results in a university building and a supermarket, the system works as intended and its indoor localization achieves competitive performance compared with traditional approaches: in the supermarket, ViNav locates users within 2 seconds, with a distance error of less than 1 meter and a facing-direction error of less than 6 degrees.
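To illustrate the calibration idea described in the abstract, the sketch below shows a minimal dead-reckoning integrator whose accumulated drift is corrected by an absolute position fix (such as the image-based localization result ViNav provides). This is not code from the paper; the fixed stride length, the heading convention (degrees clockwise from north), and the simple endpoint-shift correction are all illustrative assumptions.

```python
import math

def dead_reckon(start, headings, stride=0.7):
    """Integrate per-step compass headings from a start position.

    headings: heading in degrees (clockwise from north, the +y axis)
    for each detected step. Returns the position after each step.
    """
    x, y = start
    path = []
    for heading in headings:
        rad = math.radians(heading)
        x += stride * math.sin(rad)  # east component
        y += stride * math.cos(rad)  # north component
        path.append((x, y))
    return path

def calibrate(path, fix):
    """Shift a dead-reckoned trajectory so its endpoint matches an
    absolute fix (e.g. an image-based localization result)."""
    dx = fix[0] - path[-1][0]
    dy = fix[1] - path[-1][1]
    return [(x + dx, y + dy) for x, y in path]

# Three steps due north from the origin, then a fix reveals the
# walk actually ended at (0.3, 2.0): the whole trajectory is shifted.
raw = dead_reckon((0.0, 0.0), [0.0, 0.0, 0.0])
corrected = calibrate(raw, (0.3, 2.0))
```

A real system would distribute the correction along the trajectory (and also fuse heading), but a rigid shift is enough to show how sensor fingerprints collected along the path inherit the corrected coordinates.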


Original language: English
Number of pages: 14
Journal: IEEE Transactions on Mobile Computing
Publication status: E-pub ahead of print - 19 Jul 2018
MoE publication type: A1 Journal article-refereed

Research areas

  • 3D modelling, Buildings, Data models, Indoor localization, Indoor mapping, Indoor navigation, Mobile crowdsensing, Solid modeling, Three-dimensional displays, Trajectory

ID: 27135919