Paper Title

Trajectory Prediction for Autonomous Driving based on Multi-Head Attention with Joint Agent-Map Representation

Authors

Kaouther Messaoud, Nachiket Deo, Mohan M. Trivedi, Fawzi Nashashibi

Abstract

Predicting the trajectories of surrounding agents is an essential ability for autonomous vehicles navigating through complex traffic scenes. The future trajectories of agents can be inferred using two important cues: the locations and past motion of agents, and the static scene structure. Due to the high variability in scene structure and agent configurations, prior work has employed the attention mechanism, applied separately to the scene and agent configuration, to learn the most salient parts of both cues. However, the two cues are tightly linked. The agent configuration can inform what part of the scene is most relevant to prediction. The static scene in turn can help determine the relative influence of agents on each other's motion. Moreover, the distribution of future trajectories is multimodal, with modes corresponding to the agent's intent. The agent's intent also informs what part of the scene and agent configuration is relevant to prediction. We thus propose a novel approach applying multi-head attention by considering a joint representation of the static scene and surrounding agents. We use each attention head to generate a distinct future trajectory to address the multimodality of future trajectories. Our model achieves state-of-the-art results on the nuScenes prediction benchmark and generates diverse future trajectories compliant with scene structure and agent configuration.
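
To make the idea concrete, below is a minimal sketch (in PyTorch, not the authors' released code) of the mechanism described in the abstract: multi-head attention applied to a joint representation of surrounding agents and map features, with each attention head decoding a distinct future trajectory. All module names, feature dimensions, and the assumed pre-extracted map features are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumptions throughout): multi-head attention over a joint
# agent-map representation, where each head decodes one trajectory mode.
import torch
import torch.nn as nn


class JointAttentionPredictor(nn.Module):
    def __init__(self, d_model=64, num_heads=4, horizon=12):
        super().__init__()
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.horizon = horizon

        # Encoders for the target agent's past motion, surrounding agents,
        # and static map features (assumed to be pre-extracted elsewhere).
        self.target_enc = nn.GRU(2, d_model, batch_first=True)
        self.agent_enc = nn.GRU(2, d_model, batch_first=True)
        self.map_proj = nn.Linear(32, d_model)

        # Query/key/value projections shared across heads, split per head below.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

        # One decoder per head: each head yields a distinct trajectory mode.
        self.decoders = nn.ModuleList(
            [nn.Linear(d_model + self.d_head, horizon * 2) for _ in range(num_heads)]
        )

    def forward(self, target_hist, agent_hists, map_feats):
        # target_hist: (B, T, 2) past xy of the target agent
        # agent_hists: (B, N, T, 2) past xy of N surrounding agents
        # map_feats:   (B, M, 32) features of M map cells / lane elements
        B, N, T, _ = agent_hists.shape

        _, h_target = self.target_enc(target_hist)            # (1, B, d_model)
        h_target = h_target.squeeze(0)                        # (B, d_model)

        _, h_agents = self.agent_enc(agent_hists.reshape(B * N, T, 2))
        h_agents = h_agents.squeeze(0).reshape(B, N, -1)      # (B, N, d_model)

        h_map = self.map_proj(map_feats)                      # (B, M, d_model)

        # Joint agent-map representation: attention keys/values span both cues,
        # so each head can weigh agents and map elements against one another.
        joint = torch.cat([h_agents, h_map], dim=1)           # (B, N+M, d_model)

        q = self.q_proj(h_target).view(B, self.num_heads, 1, self.d_head)
        k = self.k_proj(joint).view(B, -1, self.num_heads, self.d_head).transpose(1, 2)
        v = self.v_proj(joint).view(B, -1, self.num_heads, self.d_head).transpose(1, 2)

        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        ctx = (attn @ v).squeeze(2)                           # (B, num_heads, d_head)

        # Decode one trajectory per head from its context + the target encoding.
        trajs = [
            dec(torch.cat([h_target, ctx[:, i]], dim=-1)).view(B, self.horizon, 2)
            for i, dec in enumerate(self.decoders)
        ]
        return torch.stack(trajs, dim=1)                      # (B, num_heads, horizon, 2)


if __name__ == "__main__":
    model = JointAttentionPredictor()
    out = model(torch.randn(8, 6, 2), torch.randn(8, 5, 6, 2), torch.randn(8, 100, 32))
    print(out.shape)  # torch.Size([8, 4, 12, 2]) -> one trajectory per head
```

Because the attention keys and values span both agent encodings and map features, each head can weigh the two cues against each other rather than attending to them separately; tying one trajectory decoder to each head is one way to obtain multimodal outputs without a separate mode classifier, mirroring the approach described in the abstract.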
