Long-term River Monitoring

In Europe, many river beds were heavily modified during the industrial revolution to act as drainage channels, with little long-term planning. Some of these rivers are now being “renaturalized” to improve flood control and other environmental services, for instance by re-creating meanders or removing dams. However, there is a lack of tools to monitor the effect of these changes on the river bed, shores, and vegetation over time.

The objective of this project is to adapt and develop robotic techniques for geometric three-dimensional reconstruction of natural environments, in the context of a wearable sensor suite. This has been partly achieved using sensor fusion techniques that combine cameras, lasers, and an inertial measurement unit to geometrically reconstruct the surrounding scene and to estimate the trajectory of the suite.

By supporting cameras with laser depth information, we show that it is possible to stabilize visual maps and recover their metric scale. We also show that factor graphs are a powerful tool for sensor fusion and support a generalized approach involving multiple sensors (see the sketch below). Further work will define a point cloud topology using learning-based techniques to generate semantic representations of natural environment constituents such as trees.
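As an illustration of the factor-graph approach, the following minimal sketch uses the GTSAM library to fuse relative pose measurements from two odometry sources into a single trajectory. The noise values, the 1 m forward motion, and the variable names are assumptions for illustration, not our actual configuration:

```python
# Minimal pose-graph sketch with GTSAM: odometry constraints from two
# sensors (e.g., visual and lidar odometry) fused into one trajectory.
# All noise values and poses below are illustrative.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose so the optimization is well-constrained.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Visual odometry: good rotation but scale-ambiguous translation (loose
# translation sigmas); lidar odometry: metric scale (tighter sigmas).
# Sigma order for Pose3 is (rot x, rot y, rot z, t x, t y, t z).
vo_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.5, 0.5, 0.5]))
lo_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05]))

step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))  # 1 m forward
for i in range(5):
    # Each sensor contributes its own between-factor on the same pose pair.
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), step, vo_noise))
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), step, lo_noise))

# Rough initial guess, then joint nonlinear optimization.
initial = gtsam.Values()
for i in range(6):
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.9 * i, 0.0, 0.0)))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(5)).translation())  # fused estimate of the last pose
```

Because each sensor simply adds its own factors over the shared pose variables, extending the graph to more sensors (e.g., IMU preintegration factors) does not change the structure of the optimization.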

Finally, a temporal alignment of the global map is planned, fusing semantic information with geometric constraints.
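One plausible reading of this fusion, sketched here under stated assumptions rather than as the project's final method, is to restrict the geometric alignment to points whose semantic class should be stable across seasons; the class ids, the `align` function, and the use of Open3D's ICP are all illustrative:

```python
# Illustrative sketch of semantics-assisted temporal alignment: keep only
# classes expected to be stable across seasons (e.g., ground, tree trunks),
# then refine the rigid alignment with point-to-point ICP.
import numpy as np
import open3d as o3d

STABLE_CLASSES = {0, 4}  # hypothetical label ids: ground, trunk

def align(points_a, labels_a, points_b, labels_b, voxel=0.2):
    """Align scan B onto scan A using only semantically stable points."""
    def to_cloud(pts, labels):
        keep = np.isin(labels, list(STABLE_CLASSES))
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts[keep]))
        return pcd.voxel_down_sample(voxel)

    src, dst = to_cloud(points_b, labels_b), to_cloud(points_a, labels_a)
    reg = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=1.0,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return reg.transformation  # 4x4 rigid transform mapping B into A
```

Filtering out seasonally varying classes (foliage, snow, water) before alignment is what lets the geometric constraint remain meaningful across acquisition dates.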

This project was supported until 2020 by the French region “Grand Est”, the Water Agency for the Rhine and Meuse Watershed and the Zone Atelier Moselle.

Work carried out by Georges Chahine.

Illustration of our acquisition system. Left: data acquisition in a snowy forest in Quebec, Canada; middle-top: backpack CAD model, showing the Ouster-16 lidar (bottom) on a custom-made mount that avoids interfering with the field of view of the camera (top); right: ground truth recording using a theodolite total station.

Some results:

Depth image from laser-supported DSO, showing projected laser rays. The laser projection improves visual tracking in terms of both stability and correct scale estimation.
3D reconstruction of a natural environment using laser-supported visual odometry, showing an open field with some trees.
3D reconstruction, using laser-supported visual odometry, of the surroundings of Georgia Tech's Metz campus in France.

A video showing a colored laser point cloud being manipulated. The colors were inferred by averaging the red, green, and blue channels of the 10 nearest-neighbor points in the corresponding visual map from visual odometry (sketched below):
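For reference, this nearest-neighbor color transfer can be written in a few lines; the array names and the use of SciPy's cKDTree are illustrative, not the project's implementation:

```python
# Sketch of the nearest-neighbor color transfer described above: each laser
# point takes the average RGB of its 10 nearest neighbors in the colored
# visual-odometry map.
import numpy as np
from scipy.spatial import cKDTree

def colorize(laser_xyz, visual_xyz, visual_rgb, k=10):
    """laser_xyz: (N,3); visual_xyz: (M,3); visual_rgb: (M,3) in [0,1]."""
    tree = cKDTree(visual_xyz)
    _, idx = tree.query(laser_xyz, k=k)  # idx: (N, k) neighbor indices
    return visual_rgb[idx].mean(axis=1)  # (N, 3) averaged colors
```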

Semantic laser point cloud, captured with an RS-bPearl lidar around Lac Symphonie in Metz, France. Pixel classification was performed at the image level using a pretrained network from https://github.com/maunzzz/cross-season-segmentation. The semantic labels were then copied to the laser points by simple (pinhole) projection of the 3D points onto the classified images, as sketched below. The data was captured using a wearable sensor suite.
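The projection step can be sketched as follows, assuming a known camera intrinsic matrix K and a lidar-to-camera extrinsic transform T_cam_lidar (both hypothetical names; this is a minimal illustration, not the exact pipeline):

```python
# Sketch of the label transfer described above: project 3D laser points
# through a pinhole model into the classified image and copy the pixel class.
import numpy as np

def transfer_labels(points_lidar, seg_image, K, T_cam_lidar):
    """points_lidar: (N,3); seg_image: (H,W) of class ids; K: (3,3);
    T_cam_lidar: (4,4). Returns (N,) labels, -1 where a point does not
    project into the image."""
    # Transform lidar points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    labels = np.full(len(points_lidar), -1, dtype=np.int32)
    in_front = pts_cam[:, 2] > 0  # keep only points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # Keep projections that land inside the image, then copy the pixel class.
    h, w = seg_image.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_image[v[valid], u[valid]]
    return labels
```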

Year-long, in-depth navigation inside point clouds:

Timelapse of aligned 3D point clouds showing natural changes:

Urban Reconstruction

Reconstruction techniques adapted to natural environments are also valid for urban reconstruction and alignment. Below is a preview of our natural environment mapping pipeline applied in an urban setting (here Prague, Czech Republic).

Day/night data collection and temporal alignment around historical sites in Prague:

Vyšehrad Castle:

More information in the following publications:

[1] G. Chahine and C. Pradalier, “Survey of monocular SLAM algorithms in natural environments”, in 15th Conference on Computer and Robot Vision (CRV), Toronto, 2018.
[2] G. Chahine and C. Pradalier, “Laser-Supported Monocular Visual Tracking for Natural Environments”, in 19th International Conference on Advanced Robotics (ICAR), Belo Horizonte, 2019.
[3] G. Chahine, M. Vaidis, F. Pomerleau, and C. Pradalier, “Mapping in unstructured natural environment: a sensor fusion framework for wearable sensor suites”, SN Applied Sciences, 2021.
