Autonomous Navigation of an RC Car

The objective of this project is to experiment with state-of-the-art techniques for driving an RC car autonomously.

The Undergraduate Research project conducted by Kunal Sharma in Fall 2019 experimented with road segmentation to determine the steering angle.

Segmentation obtained with MobileNetV2. This model was the fastest in our tests but showed distortion close to the camera; this is not an issue for computing the steering focal point.
The steering angle is determined with a simple algorithm: find the first row from the top of the image that contains a certain percentage of pixels classified as road (this gives the Y coordinate), then take the midpoint of the road pixels on that row (this gives the X coordinate). The different thresholds in this image are the different percentages tested; the results are fairly similar.
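A minimal sketch of this row-scan heuristic is shown below. It assumes a binary road mask as input and a 50% row threshold; the mask format, the threshold value, and the bottom-center reference point for the angle are illustrative assumptions, not the project's exact code.

```python
import numpy as np

def steering_point(road_mask: np.ndarray, row_threshold: float = 0.5):
    """Find the steering focal point in a binary road mask.

    road_mask: 2D array, 1 where a pixel is classified as road, 0 otherwise.
    row_threshold: fraction of road pixels a row needs to be selected
                   (hypothetical value; the project tested several).
    """
    h, w = road_mask.shape
    for y in range(h):  # scan rows from the top of the image
        road_cols = np.flatnonzero(road_mask[y])
        if road_cols.size >= row_threshold * w:
            # X is the midpoint of the road pixels on this row.
            x = int((road_cols[0] + road_cols[-1]) / 2)
            return x, y
    return None  # no row passed the threshold

def steering_angle(road_mask: np.ndarray) -> float:
    """Angle from the bottom-center of the image to the focal point,
    in degrees (0 = straight ahead; the sign convention is an assumption)."""
    h, w = road_mask.shape
    point = steering_point(road_mask)
    if point is None:
        return 0.0
    x, y = point
    return float(np.degrees(np.arctan2(x - w / 2, h - y)))
```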

The Special Problem conducted by Simon Duval, Sifei Li, Vincent Verdier, and Jared Landgraf in Fall 2020 experimented with deep learning for autonomous navigation: road segmentation, object detection, and control.

Right: original image; left: image segmented with an encoder-decoder DeepLabV3+ (Xception-71 backbone). Black = road, white = not road.
Results from object detection. Top left: SSD Inception V2; top right: R-FCN ResNet-101; bottom left: YOLOv3; bottom right: SSD MobileNetV2.
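As a hedged illustration of the detection step: the teams evaluated TensorFlow model zoo and Darknet variants, and the sketch below substitutes torchvision's SSDLite MobileNetV3 (a close relative of SSD MobileNetV2), so the model choice, input path, and score threshold are all assumptions.

```python
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Stand-in detector: torchvision's SSDLite + MobileNetV3 backbone
# (the project evaluated SSD MobileNetV2 from the TensorFlow model zoo).
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

# Load one frame from the car's camera (path is illustrative).
image = convert_image_dtype(read_image("frame.png"), torch.float)

with torch.no_grad():
    predictions = model([image])[0]  # dict with boxes, labels, scores

# Keep confident detections only; 0.5 is an arbitrary threshold.
keep = predictions["scores"] > 0.5
for box, label in zip(predictions["boxes"][keep], predictions["labels"][keep]):
    print(label.item(), box.tolist())
```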

In the meantime, the Special Problem conducted by Jackson Crandell, Martin Puig, Victor Galizzi, and Théo Galizzi experimented with traditional techniques for autonomous navigation: inverse perspective mapping ("bird's eye view"), edge detection, the Hough transform, and Kalman filtering.

1: original image; 2: bird's eye view; 3: edge detection; 4: Hough transform; 5: Kalman filter
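A compact OpenCV sketch of stages 2–4 follows; the perspective source points, Canny thresholds, and Hough parameters below are placeholders, since the real values come from calibrating this specific camera.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")  # illustrative input frame
h, w = frame.shape[:2]

# 2: inverse perspective mapping ("bird's eye view").
# The source trapezoid is a placeholder; real values depend on the
# camera's mounting height and angle.
src = np.float32([[w * 0.4, h * 0.6], [w * 0.6, h * 0.6],
                  [w * 0.9, h * 0.95], [w * 0.1, h * 0.95]])
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
M = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(frame, M, (w, h))

# 3: edge detection (Canny thresholds are arbitrary here).
gray = cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# 4: probabilistic Hough transform to extract lane-line segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

# 5: in the full pipeline, a Kalman filter (e.g. cv2.KalmanFilter)
# would smooth the lane estimate across frames.
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(birdseye, (x1, y1), (x2, y2), (0, 255, 0), 2)
```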

Both teams also experimented with car control in an AirSim simulation.
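For reference, driving a car through AirSim's Python API looks roughly like the sketch below; the throttle and steering values are arbitrary demo numbers, and how each team mapped their perception output to these controls is not shown here.

```python
import airsim

# Connect to a running AirSim car simulation.
client = airsim.CarClient()
client.confirmConnection()
client.enableApiControl(True)

controls = airsim.CarControls()
controls.throttle = 0.5   # arbitrary demo values
controls.steering = -0.2  # steering is in [-1, 1]
client.setCarControls(controls)

state = client.getCarState()
print("speed:", state.speed)
```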
