Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement

Research output: Article in a book/conference publication, peer-reviewed

Standard

Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement. / Ylimäki, Markus; Kannala, Juho; Heikkilä, Janne.

2018 24th International Conference on Pattern Recognition, ICPR 2018. IEEE, 2018. pp. 1977-1982, 8545508 (International Conference on Pattern Recognition).


Harvard

Ylimäki, M, Kannala, J & Heikkilä, J 2018, Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement. in 2018 24th International Conference on Pattern Recognition, ICPR 2018., 8545508, International Conference on Pattern Recognition, IEEE, pp. 1977-1982, Beijing, China, 20/08/2018. https://doi.org/10.1109/ICPR.2018.8545508

APA

Ylimäki, M., Kannala, J., & Heikkilä, J. (2018). Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement. In 2018 24th International Conference on Pattern Recognition, ICPR 2018 (pp. 1977-1982). [8545508] (International Conference on Pattern Recognition). IEEE. https://doi.org/10.1109/ICPR.2018.8545508

Vancouver

Ylimäki M, Kannala J, Heikkilä J. Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement. In: 2018 24th International Conference on Pattern Recognition, ICPR 2018. IEEE. 2018. p. 1977-1982. 8545508. (International Conference on Pattern Recognition). https://doi.org/10.1109/ICPR.2018.8545508

Author

Ylimäki, Markus ; Kannala, Juho ; Heikkilä, Janne. / Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement. 2018 24th International Conference on Pattern Recognition, ICPR 2018. IEEE, 2018. pp. 1977-1982 (International Conference on Pattern Recognition).

BibTeX - Download

@inproceedings{248716f2492143fda41b297f0fc34c50,
title = "Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement",
abstract = "Depth map fusion is an essential part of both stereo and RGB-D based 3-D reconstruction pipelines. Whether produced with passive stereo reconstruction or an active depth sensor, such as the Microsoft Kinect, depth maps contain noise and may be poorly registered initially. In this paper, we introduce a method which is capable of handling outliers and, especially, significant registration errors. The proposed method first fuses a sequence of depth maps into a single non-redundant point cloud, merging redundant points by giving more weight to more certain measurements. Then, the original depth maps are re-registered to the fused point cloud to refine the original camera extrinsic parameters. The fusion is then performed again with the refined extrinsic parameters. This procedure is repeated until the result is satisfactory or no significant changes occur between iterations. The method is robust to outliers and erroneous depth measurements, as well as to significant depth map registration errors caused by inaccurate initial camera poses.",
author = "Markus Ylim{\"a}ki and Juho Kannala and Janne Heikkil{\"a}",
year = "2018",
month = "11",
day = "26",
doi = "10.1109/ICPR.2018.8545508",
language = "English",
series = "International Conference on Pattern Recognition",
publisher = "IEEE",
pages = "1977--1982",
booktitle = "2018 24th International Conference on Pattern Recognition, ICPR 2018",
address = "United States",

}
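The abstract describes an iterative fuse-then-refine loop: merge redundant depth measurements into a single weighted point cloud, re-register the depth maps against it to refine the camera extrinsics, and fuse again. A minimal sketch of the confidence-weighted merging step is shown below; the function name, the greedy nearest-point strategy, and the scalar weights are illustrative assumptions, not the paper's actual implementation:

```python
import math

def fuse_points(points, weights, merge_radius=0.01):
    """Greedily merge 3-D points that lie closer than merge_radius.

    Confidence-weighted averaging: measurements with a higher weight
    (more certain) pull the merged point toward themselves, echoing the
    abstract's "more weight to more certain measurements".
    """
    fused, fused_w = [], []
    for p, w in zip(points, weights):
        for i, q in enumerate(fused):
            if math.dist(p, q) < merge_radius:
                total = fused_w[i] + w
                # weighted average of the existing fused point and the new one
                fused[i] = tuple((fused_w[i] * qc + w * pc) / total
                                 for qc, pc in zip(q, p))
                fused_w[i] = total
                break
        else:
            # no nearby fused point: start a new one
            fused.append(tuple(p))
            fused_w.append(w)
    return fused, fused_w
```

In the paper's pipeline, an outer loop would then re-register each original depth map against this fused cloud to refine its extrinsic parameters and repeat the fusion until convergence; that registration step is omitted from this sketch.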

RIS - Download

TY - GEN

T1 - Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement

AU - Ylimäki, Markus

AU - Kannala, Juho

AU - Heikkilä, Janne

PY - 2018/11/26

Y1 - 2018/11/26

N2 - Depth map fusion is an essential part of both stereo and RGB-D based 3-D reconstruction pipelines. Whether produced with passive stereo reconstruction or an active depth sensor, such as the Microsoft Kinect, depth maps contain noise and may be poorly registered initially. In this paper, we introduce a method which is capable of handling outliers and, especially, significant registration errors. The proposed method first fuses a sequence of depth maps into a single non-redundant point cloud, merging redundant points by giving more weight to more certain measurements. Then, the original depth maps are re-registered to the fused point cloud to refine the original camera extrinsic parameters. The fusion is then performed again with the refined extrinsic parameters. This procedure is repeated until the result is satisfactory or no significant changes occur between iterations. The method is robust to outliers and erroneous depth measurements, as well as to significant depth map registration errors caused by inaccurate initial camera poses.

AB - Depth map fusion is an essential part of both stereo and RGB-D based 3-D reconstruction pipelines. Whether produced with passive stereo reconstruction or an active depth sensor, such as the Microsoft Kinect, depth maps contain noise and may be poorly registered initially. In this paper, we introduce a method which is capable of handling outliers and, especially, significant registration errors. The proposed method first fuses a sequence of depth maps into a single non-redundant point cloud, merging redundant points by giving more weight to more certain measurements. Then, the original depth maps are re-registered to the fused point cloud to refine the original camera extrinsic parameters. The fusion is then performed again with the refined extrinsic parameters. This procedure is repeated until the result is satisfactory or no significant changes occur between iterations. The method is robust to outliers and erroneous depth measurements, as well as to significant depth map registration errors caused by inaccurate initial camera poses.

U2 - 10.1109/ICPR.2018.8545508

DO - 10.1109/ICPR.2018.8545508

M3 - Conference contribution

T3 - International Conference on Pattern Recognition

SP - 1977

EP - 1982

BT - 2018 24th International Conference on Pattern Recognition, ICPR 2018

PB - IEEE

ER -

ID: 31028961