Robust and practical depth map fusion for time-of-flight cameras

Markus Ylimäki*, Juho Kannala, Janne Heikkilä

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

1 Citation (Scopus)


Fusion of overlapping depth maps is an important part of many 3D reconstruction pipelines. Ideally, fusion produces an accurate and non-redundant point cloud robustly, even from noisy and partially poorly registered depth maps. In this paper, we improve an existing fusion algorithm towards this ideal. Our method builds a non-redundant point cloud from a sequence of depth maps: new measurements are either added to the existing point cloud, if they fall in an area that is not yet covered, or used to refine existing points. The method is robust to outliers and erroneous depth measurements, as well as to small depth map registration errors caused by inaccurate camera poses. The results show that the method outperforms its predecessor in both accuracy and robustness.
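The add-or-refine step described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the function name, the merge radius, the brute-force nearest-neighbour search, and the weight-based running average are all assumptions introduced here for clarity (a real implementation would also handle outlier rejection and use a spatial index such as a k-d tree).

```python
import math

def fuse_depth_points(cloud, weights, new_points, merge_radius=0.01):
    """Hypothetical sketch of non-redundant depth map fusion.

    Each new 3D measurement either refines its nearest existing point
    (if one lies within merge_radius) via a weighted running average,
    or is appended as a new point covering a previously unseen area.
    """
    cloud = [tuple(p) for p in cloud]
    weights = list(weights)
    for q in new_points:
        q = tuple(q)
        i = None
        if cloud:
            # Brute-force nearest neighbour; a k-d tree would be used in practice.
            dists = [math.dist(p, q) for p in cloud]
            i = min(range(len(dists)), key=dists.__getitem__)
        if i is not None and dists[i] < merge_radius:
            # Refine: fold the measurement into the matched point.
            w = weights[i]
            cloud[i] = tuple((w * pc + qc) / (w + 1.0)
                             for pc, qc in zip(cloud[i], q))
            weights[i] = w + 1.0
        else:
            # Add: the measurement lies in an area not yet covered.
            cloud.append(q)
            weights.append(1.0)
    return cloud, weights
```

For example, fusing a measurement that nearly coincides with an existing point refines that point instead of duplicating it, while a distant measurement is added as a new point; this is how the output stays non-redundant.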

Original language: English
Title of host publication: Image Analysis - 20th Scandinavian Conference, SCIA 2017, Proceedings
Number of pages: 13
Publication status: Published - 2017
MoE publication type: A4 Article in a conference publication
Event: Scandinavian Conference on Image Analysis - Tromso, Norway
Duration: 12 Jun 2017 – 14 Jun 2017
Conference number: 20

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10269 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: Scandinavian Conference on Image Analysis
Abbreviated title: SCIA


  • Depth map merging
  • RGB-D reconstruction


