Paper Title

Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction

Paper Authors

Masha Itkina, Mykel J. Kochenderfer

Paper Abstract

Although neural networks have seen tremendous success as predictive models in a variety of domains, they can be overly confident in their predictions on out-of-distribution (OOD) data. To be viable for safety-critical applications, like autonomous vehicles, neural networks must accurately estimate their epistemic or model uncertainty, achieving a level of system self-awareness. Techniques for epistemic uncertainty quantification often require OOD data during training or multiple neural network forward passes during inference. These approaches may not be suitable for real-time performance on high-dimensional inputs. Furthermore, existing methods lack interpretability of the estimated uncertainty, which limits their usefulness both to engineers for further system development and to downstream modules in the autonomy stack. We propose the use of evidential deep learning to estimate the epistemic uncertainty over a low-dimensional, interpretable latent space in a trajectory prediction setting. We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among the semantic concepts: past agent behavior, road structure, and social context. We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines. Our code is available at: https://github.com/sisl/InterpretableSelfAwarePrediction.
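The abstract's core idea is that evidential deep learning can produce epistemic uncertainty from a single forward pass by predicting Dirichlet evidence over a low-dimensional, discrete latent space. The sketch below is a minimal, hypothetical illustration of that general mechanism (the standard Dirichlet "vacuity" uncertainty of evidential classification), not the authors' implementation; the module name `EvidentialHead`, the PyTorch framing, the softplus activation, and the dimensions are all assumptions made for illustration.

```python
# Minimal sketch of an evidential output head over a discrete latent space.
# NOT the paper's implementation; names and dimensions are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Maps encoded features to Dirichlet evidence over K latent modes."""

    def __init__(self, feature_dim: int, latent_dim: int):
        super().__init__()
        self.fc = nn.Linear(feature_dim, latent_dim)

    def forward(self, features: torch.Tensor):
        # Non-negative evidence via softplus; Dirichlet parameters alpha = evidence + 1.
        evidence = F.softplus(self.fc(features))
        alpha = evidence + 1.0
        strength = alpha.sum(dim=-1, keepdim=True)   # Dirichlet strength S = sum(alpha)
        probs = alpha / strength                     # expected categorical probabilities
        # Epistemic (vacuity) uncertainty u = K / S, in (0, 1]; high when evidence is low.
        epistemic = alpha.shape[-1] / strength
        return probs, epistemic


# Usage: one forward pass yields both the latent-mode prediction and its uncertainty,
# avoiding the multiple passes required by ensembles or MC dropout.
head = EvidentialHead(feature_dim=64, latent_dim=16)
features = torch.randn(8, 64)                        # e.g., an encoded scene context
probs, epistemic = head(features)
print(probs.shape, epistemic.squeeze(-1))
```

In this framing, a separate head of this kind could be attached to each semantic input (past agent behavior, road structure, social context), so that the estimated uncertainty is attributed to an interpretable concept rather than reported as a single opaque score.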
