A video showing a colored laser point cloud being manipulated. The colors were inferred by averaging the red, green, and blue channels of the 10 nearest neighbor points in the corresponding visual map produced by visual odometry:
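The nearest-neighbor color transfer described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the point coordinates and colors are randomly generated placeholders, and a brute-force distance computation stands in for whatever spatial index the actual pipeline uses.

```python
import numpy as np

# Hypothetical data standing in for the real maps:
# a colored visual-odometry map and an uncolored laser point cloud.
rng = np.random.default_rng(0)
map_xyz = rng.uniform(0.0, 10.0, (1000, 3))    # visual-map 3D points
map_rgb = rng.integers(0, 256, (1000, 3))      # their RGB colors
laser_xyz = rng.uniform(0.0, 10.0, (50, 3))    # laser points to colorize

# For each laser point, find its 10 nearest visual-map points
# (brute force here; a k-d tree would be used at scale) and
# average their red, green, and blue channels.
dists = np.linalg.norm(laser_xyz[:, None, :] - map_xyz[None, :, :], axis=2)
idx = np.argsort(dists, axis=1)[:, :10]        # (50, 10) neighbor indices
laser_rgb = map_rgb[idx].mean(axis=1)          # per-point mean RGB
```

The averaging smooths out per-point color noise in the visual map at the cost of slightly blurring color boundaries.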
Semantic laser point cloud, captured with an RS bPearl around Lac Symphonie in Metz, France. Pixel classification was performed at the image level using a pretrained network from https://github.com/maunzzz/cross-season-segmentation. The semantic labels were then transferred to the laser points by simple (pinhole) projection of the 3D points onto the classified images. The data was captured using a wearable sensor suite.
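The label transfer by pinhole projection can be sketched as below. This is a hedged example, not the authors' code: the intrinsic matrix, image size, and label image are made-up placeholders, and the points are assumed to already be expressed in the camera frame (the real pipeline would first apply the laser-to-camera extrinsic transform).

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) and image size -- placeholders.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
H, W = 480, 640
# Placeholder per-pixel semantic labels from the segmentation network.
labels = np.random.default_rng(1).integers(0, 5, (H, W))

# Points already in the camera frame; the last one lies behind the camera.
pts_cam = np.array([[ 0.5, -0.2,  4.0],
                    [-1.0,  0.3,  6.0],
                    [ 0.1,  0.1, -2.0]])

# Pinhole projection: homogeneous pixel coordinates, then divide by depth.
uv_h = (K @ pts_cam.T).T
z = uv_h[:, 2]
valid = z > 0                                  # discard points behind camera
u = np.round(uv_h[valid, 0] / z[valid]).astype(int)
v = np.round(uv_h[valid, 1] / z[valid]).astype(int)
inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)

# Copy the semantic label at each projected pixel back to the 3D point.
point_labels = labels[v[inside], u[inside]]
```

Points projecting outside the image, or sitting behind the camera, simply receive no label; in practice an occlusion check would also be needed so that hidden points do not inherit labels from surfaces in front of them.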
More information can be found in the following publications:
 G. Chahine and C. Pradalier, "Survey of monocular SLAM algorithms in natural environments," in 15th Conference on Computer and Robot Vision (CRV), Toronto, 2018.
 G. Chahine and C. Pradalier, "Laser-supported monocular visual tracking for natural environments," in 19th International Conference on Advanced Robotics (ICAR), Belo Horizonte, 2019.
 G. Chahine, M. Vaidis, F. Pomerleau, and C. Pradalier, "Mapping in unstructured natural environment: a sensor fusion framework for wearable sensor suites," Journal of Intelligent & Robotic Systems, 2020 (under review).
© 2019 DREAM Lab – Georgia Tech Lorraine