Focused Inspection using Imitation Learning

This project aims to learn the utility function that drives a human expert during an inspection task. Our main contribution is that we learn the utility function from sparse expert examples and frame the problem as a saliency problem rather than as a classic Markov decision process.

Autonomous exploration, environment monitoring, inspection, and many other robotic tasks can be cast as the optimization of a utility function or cost-map over the robot's environment. The result of this optimization is the set of optimal locations that fulfill the task requirements under given constraints, such as minimizing energy consumption. Defining these functions is an important part of solving the task, as the function embeds the robot's goal and the world's constraints. The function is not unique to a class of tasks and must be specified for each task instance. Instead, we learn to generate utility functions from expert demonstrations of these tasks.
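To make the idea of optimizing a utility function over a rasterized environment concrete, the sketch below reads candidate locations off a utility map with NumPy. The map, the top-k selection, and the function name are illustrative assumptions, not part of the project's code.

```python
import numpy as np

def top_k_locations(utility_map: np.ndarray, k: int = 3):
    """Return the k cells with the highest utility as (row, col) pairs."""
    flat_indices = np.argsort(utility_map, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, utility_map.shape)) for i in flat_indices]

# Toy cost-map: higher values mark more useful inspection locations.
utility = np.random.rand(64, 64)
print(top_k_locations(utility, k=3))
```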

Figure 1: Generic method

Figure 1 illustrates the method on a dummy task. The task environment is rasterized into a 2D map, and the expert provides the task solutions as a set of positions on that map. These positions are in turn rasterized by drawing 2D Gaussians around the expert solutions. Given a set of demonstrations, a Fully Convolutional Network (FCN) is trained to generate utility functions whose maxima correspond to the expert solutions.
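A minimal sketch of this target-generation step, assuming a NumPy raster and isotropic Gaussians; the grid size, the standard deviation `sigma`, and the max-composition of overlapping Gaussians are our assumptions, not details taken from the project.

```python
import numpy as np

def rasterize_solutions(points, map_shape=(128, 128), sigma=3.0):
    """Draw an isotropic 2D Gaussian around each expert-provided (row, col) position."""
    rows, cols = np.mgrid[0:map_shape[0], 0:map_shape[1]]
    target = np.zeros(map_shape, dtype=np.float32)
    for r, c in points:
        gaussian = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        target = np.maximum(target, gaussian)  # peaks at 1.0 over each expert solution
    return target

# Example: two expert-selected positions on a 128x128 rasterized environment.
heatmap = rasterize_solutions([(32, 40), (90, 100)])
```

The resulting heatmaps serve as the regression targets the FCN is trained against.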

This method is applied to two toy exploration cases, the art gallery problem and the fortress problem, as well as to UAV network deployment.

Figure 2: Fortress problem. Left to right: rasterized polygon, expert solution, rasterized solution, network output (2 hits, 1 miss).

Figure 3: UAV network deployment. The task is rasterized into a density map. Optimal solutions are the locations that cover the most people with a limited number of drones.
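For intuition about this coverage objective, here is a simple greedy baseline, not the learned FCN described above: it assumes a square coverage footprint per drone and scores candidate positions on the density map with `scipy.ndimage.uniform_filter`. The footprint shape, radius, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def greedy_drone_placement(density, n_drones=3, radius=10):
    """Greedily place drones where a square window of half-width `radius`
    captures the most remaining density, zeroing covered cells each time."""
    remaining = density.astype(np.float64).copy()
    window = 2 * radius + 1
    placements = []
    for _ in range(n_drones):
        # Window mean times window area = density covered from each candidate cell.
        scores = uniform_filter(remaining, size=window, mode="constant") * window ** 2
        r, c = np.unravel_index(np.argmax(scores), scores.shape)
        placements.append((int(r), int(c)))
        remaining[max(0, r - radius):r + radius + 1,
                  max(0, c - radius):c + radius + 1] = 0.0  # mark as covered
    return placements

# Toy population-density map and a budget of 3 drones.
density_map = np.random.rand(100, 100)
print(greedy_drone_placement(density_map, n_drones=3, radius=10))
```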
