In Europe, many river beds were heavily modified during the industrial revolution to serve as drainage channels, with little long-term planning. Some of these rivers are now being “renaturalized” to improve flood control and other environmental services, for instance by re-creating meanders or removing dams. However, there is a lack of tools to monitor the effect of these changes on the river bed, shores and vegetation over time.
The objective of this project is to adapt and develop robotic techniques for geometric three-dimensional reconstruction of natural environments, in the context of a wearable sensor suite. This has been partly achieved with sensor fusion techniques combining cameras, lasers and an inertial measurement unit to geometrically reconstruct the surrounding scene and estimate the trajectory.
By supporting cameras with laser depth information, we show that it is possible to stabilize visual maps and recover their metric scale. We also show that factor graphs are powerful tools for sensor fusion and support a more general approach involving multiple sensors. A further planned step is the definition of a point cloud topology using learning-based techniques to generate semantic representations of natural environment constituents such as trees.
Finally, a temporal alignment of the global map is planned, fusing semantic information with geometric constraints.
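To illustrate the scale-recovery idea mentioned above: a monocular visual map is only defined up to an unknown scale factor, and laser range measurements can fix it. A common, robust way to do this is to take the median ratio between laser depths and the corresponding monocular depths; the sketch below assumes this median-ratio approach and the function name `recover_scale` is hypothetical, not the project's actual implementation.

```python
import numpy as np

def recover_scale(mono_depths, laser_depths):
    """Estimate the metric scale of an up-to-scale monocular map as the
    median ratio between laser range measurements and the corresponding
    monocular depths. The median makes the estimate robust to outliers
    (e.g. bad feature-depth associations)."""
    mono = np.asarray(mono_depths, dtype=float)
    laser = np.asarray(laser_depths, dtype=float)
    ok = mono > 0  # ignore degenerate monocular depths
    return float(np.median(laser[ok] / mono[ok]))
```

Multiplying every monocular map point (and the estimated trajectory) by the recovered factor yields a metrically consistent map; a single gross outlier does not shift the median estimate.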
This project was supported until 2020 by the French region “Grand Est”, the Water Agency for the Rhine and Meuse Watershed and the Zone Atelier Moselle.
Work carried out by Georges Chahine.
A video showing a colored laser point cloud being manipulated. The colors were inferred by averaging the red, green and blue channels of the 10 nearest neighboring points in the corresponding visual map obtained from visual odometry:
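The colorization step described in the caption can be sketched as a nearest-neighbor color transfer between the two clouds. The snippet below is a minimal brute-force version (a k-d tree would be used at scale); the function name `colorize_laser_points` is hypothetical.

```python
import numpy as np

def colorize_laser_points(laser_pts, visual_pts, visual_rgb, k=10):
    """Assign each laser point the mean RGB of its k nearest neighbors
    in the visual-odometry point cloud (brute-force search)."""
    colors = np.empty((len(laser_pts), 3))
    for i, p in enumerate(laser_pts):
        d2 = np.sum((visual_pts - p) ** 2, axis=1)  # squared distances
        idx = np.argsort(d2)[:k]                    # k nearest neighbors
        colors[i] = visual_rgb[idx].mean(axis=0)    # average their RGB
    return colors
```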
Semantic laser point cloud, captured with an RS bPearl around Lac Symphonie in Metz, France. Pixel classification was performed at the image level using a pretrained network from https://github.com/maunzzz/cross-season-segmentation. The semantic labels were then transferred to the laser points by simple pinhole projection of the 3D points onto the classified images. The data was captured using a wearable sensor suite.
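The label transfer described above can be sketched with a standard pinhole model: each 3D point in the camera frame is projected to pixel coordinates, and the label of the pixel it lands on is copied to the point. The function name `project_labels` and the intrinsics are illustrative, assuming points already expressed in the camera frame.

```python
import numpy as np

def project_labels(points_cam, label_img, fx, fy, cx, cy):
    """Copy per-pixel semantic labels onto 3D points (camera frame) by
    pinhole projection. Points behind the camera or projecting outside
    the image keep the sentinel label -1."""
    h, w = label_img.shape
    labels = np.full(len(points_cam), -1, dtype=int)
    for i, (x, y, z) in enumerate(points_cam):
        if z <= 0:
            continue  # behind the camera, not visible
        u = int(round(fx * x / z + cx))  # pinhole projection, column
        v = int(round(fy * y / z + cy))  # pinhole projection, row
        if 0 <= u < w and 0 <= v < h:
            labels[i] = label_img[v, u]
    return labels
```

In practice occlusions also have to be handled (a point may project onto a pixel that images a closer surface), e.g. with a depth buffer; the sketch omits that check.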
Year-long, in-depth navigation inside point clouds:
Timelapse of aligned 3D point clouds showing natural changes:
Reconstruction techniques adapted to natural environments are also valid for urban reconstruction and alignment. Below is a preview of our natural environment mapping pipeline applied in an urban setting (here Prague, Czech Republic).
Day / Night data collection and temporal alignment around historical sites in Prague:
More information in the following publications:
 G. Chahine and C. Pradalier, “Survey of monocular SLAM algorithms in natural environments”, in 15th Conference on Computer and Robot Vision (CRV), Toronto, 2018.
 G. Chahine and C. Pradalier, “Laser-Supported Monocular Visual Tracking for Natural Environments”, in The 19th International Conference on Advanced Robotics, Belo Horizonte, 2019.
 G. Chahine, M. Vaidis, F. Pomerleau and C. Pradalier, “Mapping in unstructured natural environment: a sensor fusion framework for wearable sensor suites”, SN Applied Sciences, 2021.
© 2019 – 2023 DREAM Lab – Georgia Tech