Paper Title

Uncertainty Quantification of Collaborative Detection for Self-Driving

Authors

Sanbao Su, Yiming Li, Sihong He, Songyang Han, Chen Feng, Caiwen Ding, Fei Miao

Abstract
Sharing information between connected and autonomous vehicles (CAVs) fundamentally improves the performance of collaborative object detection for self-driving. However, CAVs still have uncertainties in object detection due to practical challenges, which will affect later modules in self-driving such as planning and control. Hence, uncertainty quantification is crucial for safety-critical systems such as CAVs. Our work is the first to estimate the uncertainty of collaborative object detection. We propose a novel uncertainty quantification method, called Double-M Quantification, which tailors a moving block bootstrap (MBB) algorithm with direct modeling of the multivariate Gaussian distribution of each corner of the bounding box. Our method captures both the epistemic uncertainty and the aleatoric uncertainty in one inference pass based on the offline Double-M training process, and it can be used with different collaborative object detectors. Through experiments on a comprehensive collaborative perception dataset, we show that our Double-M method achieves an uncertainty-score improvement of more than 4x and an accuracy improvement of more than 3%, compared with state-of-the-art uncertainty quantification methods. Our code is available at https://coperception.github.io/double-m-quantification.
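The abstract mentions a moving block bootstrap (MBB), a resampling scheme for dependent data: instead of drawing individual samples, it draws overlapping blocks so that short-range correlation within each block is preserved. The sketch below is a minimal, generic illustration of plain MBB resampling (the function name and parameters are my own; the paper's Double-M training tailors MBB in ways not shown here):

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng=None):
    """Resample a 1-D series by concatenating randomly chosen
    overlapping blocks of length `block_len` (generic MBB sketch)."""
    rng = np.random.default_rng(rng)
    n = len(series)
    # Choose enough block start indices (0 .. n - block_len) to cover n points.
    starts = rng.integers(0, n - block_len + 1,
                          size=int(np.ceil(n / block_len)))
    sample = np.concatenate([series[s:s + block_len] for s in starts])
    return sample[:n]  # trim back to the original length

# Example: estimate the standard error of the mean of a weakly
# dependent series via 500 MBB resamples.
rng = np.random.default_rng(0)
x = 0.05 * np.cumsum(rng.normal(size=200)) + rng.normal(size=200)
means = [moving_block_bootstrap(x, block_len=10, rng=k).mean()
         for k in range(500)]
print(np.std(means))
```

The spread of the resampled statistics (here, the mean) serves as an empirical uncertainty estimate; in the paper's setting, the resampled quantity is the detector's bounding-box regression rather than a scalar mean.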
