PIVO: Probabilistic Inertial-Visual Odometry for Occlusion-Robust Navigation

Arno Solin*, Santiago Cortés Reina, Esa Rahtu, Juho Kannala

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

15 Citations (Scopus)
124 Downloads (Pure)


This paper presents a novel method for visual-inertial odometry. The method is based on an information fusion framework employing low-cost IMU sensors and the monocular camera in a standard smartphone. We formulate a sequential inference scheme, where the IMU drives the dynamical model and the camera frames are used for coupling trailing sequences of augmented poses. The novelty of the model lies in taking into account all the cross-terms in the updates, thus propagating the inter-connected uncertainties throughout the model. Stronger coupling between the inertial and visual data sources leads to robustness against occlusion and feature-poor environments. We demonstrate results on data collected with an iPhone, and provide comparisons against the Tango device and on the EuRoC data set.
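The abstract's key idea, keeping the cross-terms between the live state and the trailing augmented poses, can be illustrated with a toy stochastic-cloning EKF. The sketch below is an assumption for illustration only (toy dimensions, toy dynamics), not the paper's actual implementation: it shows that after cloning a pose into the state and running an IMU-style prediction, the live-clone cross-covariance block stays non-zero, so the uncertainties remain coupled.

```python
import numpy as np

D = 3  # toy pose dimension (e.g. 2D position + heading); an assumption

def predict(m, P, F, Q):
    """IMU-driven prediction: only the live-state block of F is non-identity,
    but multiplying the full P propagates all cross-covariances too."""
    m = F @ m
    P = F @ P @ F.T
    P[:D, :D] += Q  # process noise enters only the live state
    return m, P

def augment(m, P):
    """Clone the current pose into the trailing window (stochastic cloning):
    the clone starts perfectly correlated with the live state."""
    J = np.vstack([np.eye(len(m)), np.eye(D, len(m))])  # append a copy of the first D states
    return J @ m, J @ P @ J.T

# Start with a single live pose
m = np.zeros(D)
P = 0.1 * np.eye(D)

m, P = augment(m, P)                              # state is now [live; clone]
F = np.eye(len(m))
F[:D, :D] = 1.1 * np.eye(D)                       # toy dynamics on the live block
m, P = predict(m, P, F, 0.01 * np.eye(D))

# The live-clone cross-covariance block is non-zero: uncertainties stay coupled
print(np.allclose(P[:D, D:], 0.11 * np.eye(D)))   # prints True
```

A camera update on the trailing poses would then correct the live state through exactly these cross-covariance blocks; dropping them (as looser-coupled schemes do) would decouple the visual corrections from the inertial state.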

Original language: English
Title of host publication: Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018
Number of pages: 10
ISBN (Electronic): 9781538648865
Publication status: Published - 2018
MoE publication type: A4 Article in a conference publication
Event: IEEE Winter Conference on Applications of Computer Vision - New York, United States
Duration: 12 Mar 2018 - 15 Mar 2018
Conference number: 18

Publication series

Name: IEEE Winter Conference on Applications of Computer Vision
ISSN (Print): 2472-6737


Conference: IEEE Winter Conference on Applications of Computer Vision
Abbreviated title: WACV
Country: United States
City: New York


