Autonomous Exploration in Natural Environments

Motivation

Exploring an environment with a robot involves starting from an initial state where the surroundings of the robot are entirely unknown. As the robot interacts with the environment, it gradually builds a representation of its surroundings. While this may seem straightforward, several essential concepts need to be defined.

Firstly, what constitutes a relevant representation of the robot’s surroundings? Is it designed for the robot to interact with directly, or is it intended for analysis by a human operator? Answering this question is fundamentally a matter of design choice. There is no universally “best” map representation; it depends on specific requirements.

Secondly, in a scenario where virtually everything around the robot is unknown, how should it decide where to go next to gather information? Should it move continuously, processing information as it arrives? Alternatively, should it select specific goals and then process information as it navigates toward them? This decision, known as the exploration policy, is another key design choice.

In this work, we address both of these questions within a unified context. The robot is a ground robot, an Unmanned Ground Vehicle (UGV), and the unknown environment it explores is a natural environment, such as a park or a forest. Our chosen map representation is a familiar one in robotics: the 3D grid, which discretizes the entire volume into voxels. The various tasks and developments undertaken in this work all serve the overarching aim of enabling an exploration policy that maximizes map quality during the exploration process. In what follows, we introduce the challenges inherent to this objective, which in turn motivated this research.
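
As a concrete illustration of this representation, the sketch below shows how a voxel grid might index a 3D point into a discrete cell holding an occupancy value. The resolution, bounds, and class interface are illustrative assumptions, not the implementation used in this work.

```python
# Minimal voxel-grid sketch: map a 3D point to a voxel index and store an
# occupancy probability per voxel. Resolution and bounds are illustrative.
import numpy as np

class VoxelGrid:
    def __init__(self, origin, size, resolution=0.2):
        # origin: (3,) world coordinates of the grid corner
        # size:   (3,) number of voxels along x, y, z
        self.origin = np.asarray(origin, dtype=float)
        self.resolution = resolution
        # 0.5 = unknown occupancy prior for every voxel
        self.occupancy = np.full(size, 0.5, dtype=float)

    def voxel_index(self, point):
        """Return the integer (i, j, k) index of the voxel containing `point`."""
        idx = np.floor((np.asarray(point) - self.origin) / self.resolution).astype(int)
        if np.any(idx < 0) or np.any(idx >= self.occupancy.shape):
            raise IndexError("point lies outside the mapped volume")
        return tuple(int(v) for v in idx)

# Example: a 20 m x 20 m x 5 m volume at 0.2 m resolution
grid = VoxelGrid(origin=(-10.0, -10.0, 0.0), size=(100, 100, 25))
print(grid.voxel_index((1.3, -2.7, 0.4)))   # (56, 36, 2)
```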

Figures: scan of the area with a Leica Total Station; illustrations of an experiment with the Leica Total Station and the Husky robot.

Map quality

The first challenge lies in evaluating 3D-map quality in natural environments. Typically, map quality evaluation aims to provide a single metric that reflects the overall map quality. However, when a robot autonomously maps a natural environment for inspection or monitoring purposes, a localized measure of map quality becomes valuable: map quality is not necessarily homogeneous throughout a possibly large-scale environment.

This research focuses specifically on this scenario, where the map is built from the robot's 3D-lidar observations. In this case, the prevailing map representation is the 3D grid, where each voxel encodes information, traditionally the occupancy likelihood. However, the conventional measures of map quality, namely surface coverage and reconstruction accuracy, may not always be meaningful in environments that are both sparse and unstructured. We demonstrate this in this work by focusing on the case of mapping a sparse, unstructured natural environment, both in simulation and in a real-world experiment.

Mapping “unstructured” environments raises challenges distinct from those encountered in “structured” environments. The literature often focuses on 3D-mapping in structured environments such as urban areas; when working in natural environments, however, the distinction between structured and unstructured, dense and sparse environments becomes more prominent. The 3D-map of a natural environment is both unstructured and sparse: it consists predominantly of empty space, with only a few points where the 3D-lidar actually hits an object, and it is further degraded by the higher noise level inherent to natural environments. Additionally, the localization of the robot in the map is itself a source of errors, because the 3D-map is a probabilistic accumulation of all the point-clouds transformed into the localization frame.
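
To make this probabilistic accumulation concrete, the sketch below shows the standard log-odds occupancy update used in probabilistic volumetric grids, applied to a single voxel. The sensor-model probabilities are placeholder values, not those used in this work.

```python
# Standard log-odds occupancy update for a single voxel, as commonly used in
# probabilistic volumetric mapping. The sensor-model probabilities below are
# placeholders, not values from this work.
import math

P_HIT = 0.7    # probability a voxel is occupied given a lidar return in it
P_MISS = 0.4   # probability a voxel is occupied given a beam passed through it

def logit(p):
    return math.log(p / (1.0 - p))

def update_log_odds(l_prev, hit):
    """Fuse one observation into a voxel's log-odds occupancy."""
    return l_prev + (logit(P_HIT) if hit else logit(P_MISS))

def to_probability(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# A voxel starts unknown (p = 0.5, log-odds = 0), then is hit twice and missed once.
l = 0.0
for hit in (True, True, False):
    l = update_log_odds(l, hit)
print(round(to_probability(l), 3))   # ~0.784
```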

This research does not directly evaluate various mapping algorithms applied to natural environments. Instead, it addresses the challenges of assessing the quality of 3D maps in such environments. We propose a methodology to evaluate map quality at a local level, then evaluate and compare several metrics, and recommend one metric in particular for its robustness in this type of environment.
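
As a rough illustration of what a localized quality measure can look like, the sketch below compares, per block of the grid, the occupancy values of a reconstructed map against a reference map using the 1D Wasserstein distance (via `scipy.stats.wasserstein_distance`). The block size and the exact quantities compared are simplifying assumptions, not the metric definition from the articles listed below.

```python
# Sketch of a localized map-quality measure: split the grid into blocks and
# compare, per block, the occupancy values of the reconstructed map against a
# reference map with the 1D Wasserstein distance. Block size and the compared
# quantities are illustrative assumptions.
import numpy as np
from scipy.stats import wasserstein_distance

def local_quality_map(reconstructed, reference, block=8):
    """Return a coarse grid of per-block Wasserstein distances (lower = better)."""
    assert reconstructed.shape == reference.shape
    nx, ny, nz = (s // block for s in reconstructed.shape)
    quality = np.zeros((nx, ny, nz))
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                sl = (slice(i * block, (i + 1) * block),
                      slice(j * block, (j + 1) * block),
                      slice(k * block, (k + 1) * block))
                quality[i, j, k] = wasserstein_distance(
                    reconstructed[sl].ravel(), reference[sl].ravel())
    return quality

# Toy example: a noisy copy of a random reference occupancy grid
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(32, 32, 16))
rec = np.clip(ref + rng.normal(0.0, 0.1, size=ref.shape), 0.0, 1.0)
print(local_quality_map(rec, ref).shape)   # (4, 4, 2) local quality values
```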

This work is available through the following articles:

Stéphanie Aravecchia, Antoine Richard, Marianne Clausel, Cédric Pradalier. Measuring 3D-reconstruction quality in probabilistic volumetric maps with the Wasserstein Distance. 56th International Symposium on Robotics (ISR Europe), Sep 2023, Stuttgart, Germany.

Stéphanie Aravecchia, Marianne Clausel, Cédric Pradalier. Comparing Metrics for Evaluating 3D Map Quality in Natural Environment. Under review, 2023.

Building a prior on map quality to guide the exploration

The second challenge is more directly linked to the exploration process itself. The primary objective of autonomous exploration is to answer the question “where to go next?” At the beginning of the exploration, the entire volume is unknown; as the robot moves through the space, it gathers information and updates its map.

In robotics, autonomous exploration involves finding the next most interesting area to visit, essentially determining a goal destination. When the goal of exploration is to construct an accurate map, a central question arises: “in regions that have been explored, what is the current map quality?” Moreover, when working in natural environments, ground-truth data is often unavailable. This raises a second question: “how can we derive an exploration policy to improve the map quality without access to any reference map?” 

One solution to address both questions simultaneously is to predict the map quality at a local level from the robot’s observations. From this prediction, we can derive an exploration policy. This is the challenge we intend to address in this work.

To tackle this challenge, our work proposes several computationally efficient viewpoint statistics that offer insights into local map quality. These statistics highlight which areas are worth revisiting and how to revisit them; conversely, they help identify areas whose revisitation would not significantly improve map quality, allowing the robot to discard them. We demonstrate statistically that these viewpoint statistics are predictive of local map quality.
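
The sketch below illustrates how such statistics could be accumulated per voxel as observations arrive: an observation count, the minimum observation range, and the spread of viewing directions. It is loosely inspired by the statistics named in this work (nobs, rmin, σθ), but the exact definitions used here are illustrative assumptions, not the paper's.

```python
# Sketch of per-voxel viewpoint statistics accumulated as lidar observations
# arrive: observation count, minimum observation range, and spread of viewing
# directions. Definitions here are illustrative, not the paper's.
import numpy as np
from collections import defaultdict

class ViewpointStats:
    def __init__(self):
        self.n_obs = defaultdict(int)              # voxel -> observation count
        self.r_min = defaultdict(lambda: np.inf)   # voxel -> closest observation range
        self.angles = defaultdict(list)            # voxel -> azimuths of viewing rays

    def add_observation(self, voxel, sensor_pos, point):
        """Record that `point` (inside `voxel`) was observed from `sensor_pos`."""
        ray = np.asarray(point) - np.asarray(sensor_pos)
        self.n_obs[voxel] += 1
        self.r_min[voxel] = min(self.r_min[voxel], float(np.linalg.norm(ray)))
        self.angles[voxel].append(float(np.arctan2(ray[1], ray[0])))

    def sigma_theta(self, voxel):
        """Spread of the viewing azimuths for a voxel (0 if seen at most once)."""
        a = self.angles[voxel]
        return float(np.std(a)) if len(a) > 1 else 0.0

stats = ViewpointStats()
stats.add_observation((5, 5, 1), sensor_pos=(0.0, 0.0, 0.5), point=(1.1, 1.0, 0.3))
stats.add_observation((5, 5, 1), sensor_pos=(2.0, -1.0, 0.5), point=(1.1, 1.0, 0.3))
print(stats.n_obs[(5, 5, 1)],
      round(stats.r_min[(5, 5, 1)], 2),
      round(stats.sigma_theta((5, 5, 1)), 2))   # 2 1.5 0.63
```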

Furthermore, we integrate these viewpoint statistics into an exploration policy. An exploration policy balances a cost, generally the cost of reaching a goal, against an expected gain, which represents the knowledge the robot expects to acquire upon reaching that goal. The best goal according to the policy is the Next-Best-View.

In this work, we incorporate these statistics into the selection of the Next-Best-View. By doing so, we demonstrate that when the exploration policy formulates its information gain based on the viewpoint statistics, the overall map quality improves.
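
A minimal sketch of one common way to score this trade-off is given below: each candidate goal carries an expected gain derived from a viewpoint statistic and is discounted by its path cost. The exponential discount and the weighting parameter `lam` are generic choices from the Next-Best-View literature, not necessarily those used in this work, and the candidate values are toy numbers.

```python
# Sketch of a Next-Best-View selection: each candidate goal carries an expected
# information gain (here, derived from a generic viewpoint statistic) and a cost
# to reach it; the goal maximizing the discounted gain is chosen. The exponential
# discount and the lambda weight are generic choices, not this work's policy.
import math

def next_best_view(candidates, lam=0.25):
    """candidates: list of (goal_id, expected_gain, path_cost). Returns the best goal_id."""
    def score(candidate):
        _, gain, cost = candidate
        return gain * math.exp(-lam * cost)
    return max(candidates, key=score)[0]

# Toy candidates: (goal, gain from viewpoint statistics, path cost in meters)
candidates = [("frontier_A", 4.0, 12.0),
              ("frontier_B", 2.5, 3.0),
              ("frontier_C", 5.0, 25.0)]
print(next_best_view(candidates))   # "frontier_B": moderate gain, cheap to reach
```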

This work is available through the following article:

Stéphanie Aravecchia, Antoine Richard, Marianne Clausel, Cédric Pradalier. Next-Best-View selection from observation viewpoint statistics. International Conference on Intelligent Robots and Systems (IROS), IEEE, Oct 2023, Detroit, United States.

Figure: distribution of map quality against the viewpoint statistics. The first row corresponds to occupied space, the second row to empty space, and each column to a different viewpoint statistic. The line is the median quality against the viewpoint statistic; the filled area is the corresponding interquartile range (IQR). The graph highlights the correlation between quality and our four viewpoint statistics.
Figure: reconstruction quality as a function of the proportion of discovered space in the ground plane. The plots are arranged by increasing scene difficulty. In each plot, the line is the mean over the experiments and the shaded area spans the min and max. Our proposed methods, where the Next-Best-View selection is based on each viewpoint statistic individually (nobs, rmin, nΩ, σθ), are compared to the baselines (random-frontier, closest-frontier, random-free). Our curves generally lie above the naive approaches, showing that exploration policies based on viewpoint statistics lead to higher map quality during the exploration process.
