
Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection

November 2019

tl;dr: One of the first papers to introduce epistemic and aleatoric uncertainty into object detection.

Overall impression

This paper gives a good level of detail on how to adapt aleatoric and epistemic uncertainty estimation to an object detector.

Modeling aleatoric uncertainty boosts performance (by 1-5%). Modeling epistemic uncertainty via Monte Carlo dropout degrades performance slightly --> this is also observed by Bayesian YOLOv3.
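For reference, a minimal sketch of the heteroscedastic (aleatoric) regression loss in the style of Kendall & Gal, where the detection head predicts a log-variance alongside each box regression output. The paper's exact formulation (e.g. built on smooth L1) may differ; the function and variable names here are purely illustrative.

```python
import torch

def aleatoric_reg_loss(pred_box, pred_log_var, gt_box):
    """Illustrative heteroscedastic regression loss (not the paper's exact loss).

    The residual is attenuated by the predicted variance, while the
    log-variance term penalizes the network for claiming high uncertainty
    everywhere.
    """
    residual = (pred_box - gt_box) ** 2
    return (0.5 * torch.exp(-pred_log_var) * residual + 0.5 * pred_log_var).mean()
```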

This paper only models aleatoric uncertainty in the FRH (Faster R-CNN head). This work is extended by Towards Safe AD2, which models uncertainty in both the RPN and the FRH.

Uncertainty can be used to efficiently improve the vehicle detector in an active learning paradigm: the detector actively queries unseen samples with high epistemic uncertainty for labeling.
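A hedged sketch of how such an active learning query could look with Monte Carlo dropout: run several stochastic forward passes, score each unlabeled sample by the variance of its predictions, and send the highest-scoring samples to the annotator. The `model(x)` interface returning a box tensor is an assumption for illustration, not the paper's actual API.

```python
import torch

@torch.no_grad()
def epistemic_score(model, x, n_samples=20):
    """Epistemic uncertainty via MC dropout: keep dropout stochastic at test
    time, run several forward passes, and use the variance of the box
    predictions as the uncertainty score."""
    model.train()  # keeps dropout layers active during inference
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.var(dim=0).mean().item()

def query_for_labeling(model, unlabeled_pool, k=100):
    """Rank unlabeled samples by epistemic uncertainty and return the
    indices of the top-k candidates to label next (active learning loop)."""
    scores = [epistemic_score(model, x) for x in unlabeled_pool]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```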

Key ideas

Technical details

Notes

Knowing what an object detection model is unsure about is of paramount importance for safe autonomous driving. Most object detectors can only tell the human driver what they have seen, but not how certain they are about it. Detecting an abnormal object that differs from the training dataset may result in high epistemic uncertainty, while detecting a distant or occluded object may result in high aleatoric uncertainty.
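To make the epistemic/aleatoric split concrete, here is a common decomposition (following Kendall & Gal) computed from MC dropout passes. The `(box_mean, box_log_var)` output interface is a hypothetical assumption and not necessarily how the paper's detection head is organized.

```python
import torch

@torch.no_grad()
def decompose_uncertainty(model, x, n_samples=20):
    """Common predictive-uncertainty decomposition (sketch, not the paper's code):
    epistemic = variance of the mean predictions across MC dropout passes,
    aleatoric = average of the variances predicted by the network itself."""
    model.train()  # keep dropout active at inference
    means, log_vars = zip(*[model(x) for _ in range(n_samples)])
    means, log_vars = torch.stack(means), torch.stack(log_vars)
    epistemic = means.var(dim=0)            # spread across MC samples
    aleatoric = log_vars.exp().mean(dim=0)  # average predicted observation noise
    return epistemic, aleatoric
```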