Paper Title
Rethinking Efficient Lane Detection via Curve Modeling

Authors

Zhengyang Feng, Shaohua Guo, Xin Tan, Ke Xu, Min Wang, Lizhuang Ma

Abstract
This paper presents a novel parametric curve-based method for lane detection in RGB images. Unlike state-of-the-art segmentation-based and point detection-based methods, which typically require heuristics to either decode predictions or formulate a large number of anchors, curve-based methods can learn holistic lane representations naturally. To handle the optimization difficulties of existing polynomial curve methods, we propose to exploit the parametric Bézier curve for its ease of computation, stability, and high degrees of transformation freedom. In addition, we propose a deformable convolution-based feature flip fusion to exploit the symmetry properties of lanes in driving scenes. The proposed method achieves a new state-of-the-art performance on the popular LLAMAS benchmark. It also achieves favorable accuracy on the TuSimple and CULane datasets, while retaining both low latency (> 150 FPS) and a small model size (< 10M). Our method can serve as a new baseline, shedding light on parametric curve modeling for lane detection. Code for our model and for PytorchAutoDrive, a unified framework for self-driving perception, is available at: https://github.com/voldemortX/pytorch-auto-drive .
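The abstract's core idea is to represent each lane as a parametric Bézier curve, which is cheap to evaluate and stable to optimize. As a minimal sketch (not the paper's implementation), the snippet below samples a cubic Bézier curve from four 2-D control points using the Bernstein basis; the function name and the number of sample points are illustrative assumptions.

```python
import numpy as np

def cubic_bezier(control_points, num_samples=72):
    """Sample points along a cubic Bezier curve.

    control_points: array of shape (4, 2) holding (x, y) control points.
    Returns an array of shape (num_samples, 2) of points on the curve.
    """
    t = np.linspace(0.0, 1.0, num_samples)[:, None]  # curve parameter, (N, 1)
    p0, p1, p2, p3 = control_points
    # Degree-3 Bernstein basis: (1-t)^3, 3(1-t)^2 t, 3(1-t) t^2, t^3
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

# Example: a gently curving lane-like arc in image coordinates.
lane = cubic_bezier(np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 1.0], [3.0, 0.0]]))
```

Because the curve is defined by only four control points, a detector can regress a whole lane as a small fixed-size vector, avoiding per-point anchors or post-hoc decoding heuristics.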
