Paper Title

Ultra Fast Structure-aware Deep Lane Detection

Paper Authors

Zequn Qin, Huanyu Wang, Xi Li

Paper Abstract

Modern methods mainly regard lane detection as a problem of pixel-wise segmentation, which is struggling to address the problem of challenging scenarios and speed. Inspired by human perception, the recognition of lanes under severe occlusion and extreme lighting conditions is mainly based on contextual and global information. Motivated by this observation, we propose a novel, simple, yet effective formulation aiming at extremely fast speed and challenging scenarios. Specifically, we treat the process of lane detection as a row-based selecting problem using global features. With the help of row-based selecting, our formulation could significantly reduce the computational cost. Using a large receptive field on global features, we could also handle the challenging scenarios. Moreover, based on the formulation, we also propose a structural loss to explicitly model the structure of lanes. Extensive experiments on two lane detection benchmark datasets show that our method could achieve the state-of-the-art performance in terms of both speed and accuracy. A light-weight version could even achieve 300+ frames per second with the same resolution, which is at least 4x faster than previous state-of-the-art methods. Our code will be made publicly available.
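
To make the row-based selecting formulation more concrete, below is a minimal PyTorch sketch, not the authors' released implementation; the feature dimension, the number of gridding cells, row anchors, and lanes, and the class name RowAnchorLaneHead are illustrative assumptions. It classifies each predefined row anchor, per lane, into one of the gridding cells or an extra "no lane" class using a classifier over pooled global features.

```python
import torch
import torch.nn as nn


class RowAnchorLaneHead(nn.Module):
    """Sketch of row-based lane selection (all sizes below are assumptions)."""

    def __init__(self, feat_dim=1800, num_cells=100, num_rows=56, num_lanes=4):
        super().__init__()
        self.num_cells = num_cells   # gridding cells per row (w)
        self.num_rows = num_rows     # predefined row anchors (h)
        self.num_lanes = num_lanes   # maximum number of lanes (C)
        # A single classifier over pooled global features; the large receptive
        # field comes from conditioning on the whole feature map at once.
        self.cls = nn.Sequential(
            nn.Linear(feat_dim, 2048),
            nn.ReLU(inplace=True),
            nn.Linear(2048, (num_cells + 1) * num_rows * num_lanes),
        )

    def forward(self, global_feat):
        # global_feat: (batch, feat_dim) pooled backbone features
        out = self.cls(global_feat)
        # (batch, num_cells + 1, num_rows, num_lanes): per row anchor and lane,
        # scores over the gridding cells plus one "no lane" background class.
        return out.view(-1, self.num_cells + 1, self.num_rows, self.num_lanes)


if __name__ == "__main__":
    head = RowAnchorLaneHead()
    feats = torch.randn(2, 1800)       # stand-in for pooled backbone features
    logits = head(feats)
    print(logits.shape)                # torch.Size([2, 101, 56, 4])
    # argmax over the cell axis selects the horizontal position of each lane
    # on each row anchor (or the background class when no lane is present).
    print(logits.argmax(dim=1).shape)  # torch.Size([2, 56, 4])
```

Because predictions are made only on h row anchors with w + 1 classes per lane, rather than on every pixel of a full-resolution segmentation map, the prediction head does far less work, which is the source of the computational savings described in the abstract.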
