Paper Title
PriorLane: A Prior Knowledge Enhanced Lane Detection Approach Based on Transformer
Paper Authors
Paper Abstract
Lane detection is one of the fundamental modules in self-driving. In this paper, we employ a transformer-only method for lane detection; it can therefore benefit from the rapid development of fully vision transformers and achieves state-of-the-art (SOTA) performance on both the CULane and TuSimple benchmarks by fine-tuning weights fully pre-trained on large datasets. More importantly, this paper proposes a novel and general framework called PriorLane, which enhances the segmentation performance of the fully vision transformer by introducing low-cost local prior knowledge. Specifically, PriorLane utilizes an encoder-only transformer to fuse the features extracted by a pre-trained segmentation model with prior knowledge embeddings. Note that a Knowledge Embedding Alignment (KEA) module is adopted to enhance the fusion performance by aligning the knowledge embeddings. Extensive experiments on our Zjlab dataset show that PriorLane outperforms SOTA lane detection methods by 2.82% mIoU when prior knowledge is employed. The code will be released at: https://github.com/vincentqqb/PriorLane.
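The fusion idea sketched in the abstract (concatenating image feature tokens with prior-knowledge embedding tokens and letting a transformer encoder attend across both) can be illustrated with a toy example. This is a minimal sketch, not the paper's implementation: the token dimensions, the single-head attention, and the random projections are all illustrative assumptions, standing in for the pre-trained segmentation backbone and the encoder-only fusion transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def toy_self_attention(tokens, d_k, seed=0):
    """Single-head self-attention with random projections (illustration only)."""
    rng = np.random.default_rng(seed)
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) * d ** -0.5 for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # every token attends to image AND knowledge tokens
    return attn @ V

# Image feature tokens, standing in for the output of a pre-trained segmentation model.
img_tokens = np.random.default_rng(1).standard_normal((64, 32))   # 64 patch tokens, dim 32
# Prior-knowledge embedding tokens (e.g. an embedded local map), a hypothetical shape.
know_tokens = np.random.default_rng(2).standard_normal((16, 32))  # 16 knowledge tokens

# Fuse by attending over the concatenated token sequence.
fused = toy_self_attention(np.concatenate([img_tokens, know_tokens], axis=0), d_k=32)
# Keep only the image-token positions for a downstream segmentation head.
seg_features = fused[:64]
print(seg_features.shape)  # (64, 32)
```

In the actual PriorLane framework the knowledge embeddings are first aligned by the KEA module before fusion; the sketch above omits that step and shows only the attention-based mixing of the two token sets.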