Semantic segmentation of raw multispectral laser scanning data from urban environments with deep neural networks

Mikael Reichler, Josef Taher*, Petri Manninen, Harri Kaartinen, Juha Hyyppä, Antero Kukko

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review



Real-time semantic segmentation of point clouds is increasingly important in applications related to 3D city modelling and mapping, automated forest inventory, autonomous driving and mobile robotics. Current state-of-the-art point cloud semantic segmentation methods rely heavily on the availability of 3D laser scanning data. This is problematic for low-latency, real-time applications that use data from high-precision mobile laser scanners, as those are typically 2D line scanning devices. In this study, we experiment with real-time semantic segmentation of high-density multispectral point clouds collected with 2D line scanners in urban environments, using encoder-decoder convolutional neural network architectures. We introduce a rasterized multi-scan input format that can be constructed exclusively from the raw (non-georeferenced) 2D laser scanner profile stream, without odometry information. In addition, we investigate the impact of multispectral data on segmentation accuracy. The dataset used for training, validation and testing was collected with the multispectral FGI AkhkaR4-DW backpack laser scanning system operating at wavelengths of 905 nm and 1550 nm, and consists of 228 million points in total (39 583 scans). The data was divided into 13 classes representing various targets in urban environments. The results show that the increased spatial context of the multi-scan format improves segmentation performance on the single-wavelength lidar dataset from 45.4 mIoU (a single scan) to 62.1 mIoU (24 consecutive scans). In the multispectral point cloud experiments, the model achieved 43.5 mIoU, a 71 % and 28 % relative increase over the purely single-wavelength reference experiments, which achieved 25.4 mIoU (905 nm) and 34.1 mIoU (1550 nm), respectively.
Our findings show that it is possible to semantically segment 2D line scanner data with good results by combining consecutive scans without the need for odometry information. The results also serve as motivation for developing multispectral mobile laser scanning systems that can be used in challenging urban surveys.
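The multi-scan input format described above can be illustrated with a minimal sketch: consecutive line scans are stacked in acquisition order into a 2D raster whose rows are scans, columns are beam positions, and channels hold range and per-wavelength reflectance. This is an assumption-laden illustration, not the paper's implementation; the field names (`range`, `refl_905`, `refl_1550`), the beam count, and the channel layout are hypothetical.

```python
import numpy as np

def rasterize_scans(scans, num_beams=2048):
    """Stack consecutive 2D line scans into a multi-channel raster image.

    scans: sequence of dicts, each with per-beam arrays of length num_beams:
        'range'     -- measured distances
        'refl_905'  -- reflectance at 905 nm
        'refl_1550' -- reflectance at 1550 nm
    Returns an array of shape (len(scans), num_beams, 3), an image-like
    input for a 2D encoder-decoder CNN. No odometry is needed: rows are
    simply ordered by acquisition time.
    """
    image = np.zeros((len(scans), num_beams, 3), dtype=np.float32)
    for i, scan in enumerate(scans):
        image[i, :, 0] = scan['range']
        image[i, :, 1] = scan['refl_905']
        image[i, :, 2] = scan['refl_1550']
    return image
```

With, say, 24 consecutive scans (the window size that performed best in the reported experiments), this yields a 24-row raster that a segmentation network can process in one forward pass.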

Original language: English
Article number: 100061
Pages (from-to): 1-17
Number of pages: 17
Journal: ISPRS Open Journal of Photogrammetry and Remote Sensing
Publication status: Published - Apr 2024
MoE publication type: A1 Journal article-refereed


  • Convolutional neural network
  • Deep learning
  • Mobile laser scanning
  • Multispectral point cloud
  • Real-time
  • Semantic segmentation


