Abstract
Robust and accurate six-degree-of-freedom tracking on portable devices remains a challenging problem, especially on small hand-held devices such as smartphones. For improved robustness and accuracy, complementary movement information from an IMU and a camera is often fused. Conventional visual-inertial methods fuse information from IMUs with a sparse cloud of feature points tracked by the device camera. We consider a visually dense approach, where the IMU data is fused with the dense optical flow field estimated from the camera data. Learning-based methods applied to the full image frames can leverage visual cues and the global consistency of the flow field to improve the flow estimates. We show how a learning-based optical flow model can be combined with conventional inertial navigation, and how ideas from probabilistic deep learning can improve the robustness of the measurement updates. The practical applicability is demonstrated on real-world data acquired by an iPad in a challenging low-texture environment.
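The fusion scheme outlined above can be illustrated with a minimal sketch: an extended-Kalman-filter-style loop propagates the device state with IMU samples, and a flow-derived velocity pseudo-measurement, weighted by an uncertainty that a probabilistic flow network would predict per frame, drives the measurement update. The simplified 2D state, the constant-rate timing, and all names here (`predict`, `update`, `sigma_flow`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch only (assumed structure, not the paper's method):
# state x = [p_x, p_y, v_x, v_y]; IMU acceleration drives the prediction,
# and a velocity pseudo-measurement derived from dense optical flow,
# with a (here hand-specified) predicted uncertainty, drives the update.

dt = 0.01  # assumed IMU sample interval [s]

F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])            # state transition
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])  # acceleration input
Q = 1e-4 * np.eye(4)                                      # process noise (assumed)
H = np.hstack([np.zeros((2, 2)), np.eye(2)])              # measures velocity only

def predict(x, P, accel):
    """Propagate the state with one IMU acceleration sample."""
    x = F @ x + B @ accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, v_flow, sigma_flow):
    """Fuse a flow-derived velocity with its predicted standard deviation.

    A probabilistic flow model would output sigma_flow per frame; a large
    value (e.g. on low-texture frames) automatically down-weights the
    visual update relative to the inertial prediction.
    """
    R = sigma_flow**2 * np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (v_flow - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy run: constant true velocity, noisy IMU, a camera frame every 10 steps.
rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
for k in range(200):
    accel = rng.normal(0.0, 0.05, size=2)  # stand-in IMU reading
    x, P = predict(x, P, accel)
    if k % 10 == 0:
        v_flow = np.array([1.0, 0.0]) + rng.normal(0.0, 0.1, size=2)
        x, P = update(x, P, v_flow, sigma_flow=0.1)
print("estimated velocity:", x[2:])
```

The key design point the sketch captures is that the learned flow model contributes not only a measurement but also its uncertainty, so the filter can gracefully fall back on inertial dead-reckoning when the visual evidence is weak.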
Original language | English
---|---
Title of host publication | Proceedings of 2020 23rd International Conference on Information Fusion, FUSION 2020
Publisher | IEEE
ISBN (Electronic) | 9780578647098
DOIs |
Publication status | Published - Jul 2020
MoE publication type | A4 Article in a conference publication
Event | International Conference on Information Fusion (no. 23) - Virtual/Online, South Africa; 6 Jul 2020 → 9 Jul 2020
Conference
Conference | International Conference on Information Fusion
---|---
Abbreviated title | FUSION
Country | South Africa
City | Virtual, Online
Period | 06/07/2020 → 09/07/2020