This project learns the utility function that drives a human expert during an inspection task. Our main contribution is that we learn the utility function from sparse expert demonstrations and frame the problem as a saliency prediction problem rather than a classic Markov decision process.
Autonomous exploration, environment monitoring, inspection, and many other robotic tasks can be cast as the optimization of a utility function or cost-map over the robot's environment. The result of this optimization is a set of optimal locations that fulfill the task requirements under given constraints, such as minimizing energy consumption. Defining these functions is an important part of solving the task, since the function encodes the robot's goal and the constraints of the world. The function is not unique for a class of tasks and must be specified for each task instance. Instead, we learn to generate utility functions from expert demonstrations of these tasks.
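To make the idea concrete, the following is a minimal sketch (not the project's actual code) of how a utility map, once evaluated on a discrete grid, could be optimized to pick inspection locations under a simple energy-like constraint. The function name `select_best_locations`, the grid representation, and the distance-based travel penalty are illustrative assumptions.

```python
# Minimal sketch: pick the highest-utility cells of a grid, penalized by
# distance from the robot as a stand-in for energy consumption.
# The utility map itself is assumed to be given (e.g., produced by a learned model).
import numpy as np

def select_best_locations(utility_map, robot_pos, energy_weight=0.1, k=3):
    """Return the k grid cells maximizing utility minus a travel-cost penalty."""
    rows, cols = np.indices(utility_map.shape)
    # Euclidean distance from the robot approximates the cost of reaching a cell.
    travel_cost = np.hypot(rows - robot_pos[0], cols - robot_pos[1])
    score = utility_map - energy_weight * travel_cost
    # Indices of the k highest-scoring cells, best first.
    flat_idx = np.argsort(score, axis=None)[-k:][::-1]
    return [np.unravel_index(i, utility_map.shape) for i in flat_idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utility = rng.random((20, 20))  # placeholder for a learned utility map
    goals = select_best_locations(utility, robot_pos=(0, 0), k=3)
    print("Candidate inspection locations:", goals)
```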