Paper Title
Continual BatchNorm Adaptation (CBNA) for Semantic Segmentation
Paper Authors
Abstract
Environment perception in autonomous vehicles often relies heavily on deep neural networks (DNNs), which are subject to domain shifts that significantly degrade performance during DNN deployment. Usually, this problem is addressed by unsupervised domain adaptation (UDA) approaches trained either simultaneously on source and target domain datasets, or even source-free on target data only, in an offline fashion. In this work, we further extend a source-free UDA approach to a continual, and therefore online-capable, UDA on a single-image basis for semantic segmentation. Accordingly, our method only requires the pre-trained model from the supplier (trained in the source domain) and the current (unlabeled target domain) camera image. Our method, Continual BatchNorm Adaptation (CBNA), modifies the source domain statistics in the batch normalization layers using target domain images in an unsupervised fashion, which yields consistent performance improvements during inference. Thereby, in contrast to existing works, our approach can be applied to improve a DNN continuously on a single-image basis during deployment without access to source data, without algorithmic delay, and with nearly no computational overhead. We show the consistent effectiveness of our method across a wide variety of source/target domain settings for semantic segmentation. Code is available at https://github.com/ifnspaml/CBNA.
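The core mechanism described in the abstract, replacing a BatchNorm layer's stored source-domain statistics with a mixture that includes statistics from the current target image, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: the function name, the mixing weight `alpha`, and the plain mean/variance blending are all assumptions made for illustration.

```python
import numpy as np

def single_image_bn_adapt(x, mu_src, var_src, gamma, beta, alpha=0.1, eps=1e-5):
    """Sketch of single-image BatchNorm statistic adaptation.

    x        : feature map of ONE target-domain image, shape (C, H, W)
    mu_src   : stored source-domain channel means, shape (C,)
    var_src  : stored source-domain channel variances, shape (C,)
    gamma    : learned BN scale parameters, shape (C,)
    beta     : learned BN shift parameters, shape (C,)
    alpha    : mixing weight for the target-image statistics
               (assumed hyperparameter, not from the paper)
    """
    # Per-channel statistics of the current (unlabeled) target image
    mu_t = x.mean(axis=(1, 2))
    var_t = x.var(axis=(1, 2))

    # Blend source and target statistics. The stored source statistics are
    # never overwritten, so the adaptation is recomputed from scratch for
    # every new image -- this is what makes the scheme continual and
    # free of algorithmic delay.
    mu = (1.0 - alpha) * mu_src + alpha * mu_t
    var = (1.0 - alpha) * var_src + alpha * var_t

    # Standard BatchNorm transform, now using the mixed statistics
    x_hat = (x - mu[:, None, None]) / np.sqrt(var[:, None, None] + eps)
    return gamma[:, None, None] * x_hat + beta[:, None, None]
```

Since only two extra per-channel reductions (mean and variance) are computed per layer, the overhead on top of a normal forward pass is small, which matches the "nearly without computational overhead" claim in the abstract.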