Paper Title

Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation

Paper Authors

Divya Kothandaraman, Athira Nambiar, Anurag Mittal

Paper Abstract

Practical autonomous driving systems face two crucial challenges: memory constraints and domain gap issues. In this paper, we present a novel approach to learning domain adaptive knowledge in models with limited memory, thus equipping the model to deal with both issues in a comprehensive manner. We term this "Domain Adaptive Knowledge Distillation" and address it in the context of unsupervised domain-adaptive semantic segmentation by proposing a multi-level distillation strategy that distills knowledge effectively at different levels. Further, we introduce a novel cross-entropy loss that leverages pseudo labels from the teacher. These pseudo teacher labels play a multifaceted role: (i) enabling knowledge distillation from the teacher network to the student network, and (ii) serving as a proxy for the ground truth on target-domain images, where the problem is completely unsupervised. We introduce four paradigms for distilling domain adaptive knowledge and carry out extensive experiments and ablation studies on real-to-real as well as synthetic-to-real scenarios. Our experiments demonstrate the effectiveness of our proposed method.
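
To make the abstract's pseudo-teacher-label cross-entropy loss concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the function name, the confidence-threshold filtering, and the use of the teacher's argmax predictions as pseudo labels are illustrative assumptions layered on the idea the abstract describes (the teacher's predictions on unlabeled target-domain images stand in for ground truth when training the compact student).

```python
# Minimal sketch (assumptions, not the paper's code) of a cross-entropy
# loss on teacher pseudo labels for semantic segmentation.
import torch
import torch.nn.functional as F

def pseudo_label_ce_loss(student_logits, teacher_logits, confidence_threshold=0.9):
    """Cross-entropy of the student against teacher pseudo labels.

    student_logits, teacher_logits: (N, C, H, W) segmentation logits.
    Pixels where the teacher is unconfident are ignored; the threshold
    value is an assumption for illustration, not taken from the paper.
    """
    with torch.no_grad():
        teacher_probs = teacher_logits.softmax(dim=1)
        confidence, pseudo_labels = teacher_probs.max(dim=1)    # (N, H, W)
        pseudo_labels[confidence < confidence_threshold] = 255  # ignore index

    return F.cross_entropy(student_logits, pseudo_labels, ignore_index=255)

# Usage with random logits standing in for network outputs (19 classes,
# as in Cityscapes-style driving-scene benchmarks):
student = torch.randn(2, 19, 64, 64)
teacher = torch.randn(2, 19, 64, 64)
loss = pseudo_label_ce_loss(student, teacher)
```

Because no target-domain ground truth exists in the unsupervised setting, the same term serves both roles named in the abstract: it distills the teacher's decisions into the student and substitutes for supervision on target images.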
