Paper Title
AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection
Paper Authors
Paper Abstract
Object detection networks have reached impressive performance levels, yet a lack of suitable data in specific applications often limits their use in practice. Typically, additional data sources are utilized to support the training task. However, domain gaps between these different data sources pose a challenge in deep learning. GAN-based image-to-image style transfer is commonly applied to shrink the domain gap, but it is unstable and decoupled from the object detection task. We propose AWADA, an Attention-Weighted Adversarial Domain Adaptation framework that creates a feedback loop between the style-transformation and detection tasks. By constructing foreground object attention maps from object detector proposals, we focus the transformation on foreground object regions and stabilize style-transfer training. In extensive experiments and ablation studies, we show that AWADA reaches state-of-the-art unsupervised domain adaptation performance for object detection on commonly used benchmarks for tasks such as synthetic-to-real, adverse-weather, and cross-camera adaptation.
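The core mechanism described in the abstract — building a foreground attention map from detector proposals and using it to weight the adversarial style-transfer objective — can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function names, the max-accumulation of scores, and the `base_weight` background term are assumptions made purely for illustration:

```python
import numpy as np

def attention_map_from_proposals(boxes, scores, height, width):
    """Build a [0, 1] foreground attention map by rasterizing detector
    proposal boxes, keeping the highest confidence at each pixel.
    (Illustrative sketch; not the paper's exact construction.)"""
    attn = np.zeros((height, width), dtype=np.float32)
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(width, int(x2)), min(height, int(y2))
        attn[y1:y2, x1:x2] = np.maximum(attn[y1:y2, x1:x2], s)
    return attn

def attention_weighted_loss(pixel_loss, attn, base_weight=0.1):
    """Weight a per-pixel adversarial loss map so that foreground object
    regions dominate, while background pixels keep a small base weight
    (base_weight is a hypothetical hyperparameter for this sketch)."""
    weights = base_weight + (1.0 - base_weight) * attn
    return float((weights * pixel_loss).sum() / weights.sum())
```

In this toy form, a proposal box raises the attention inside its region to the proposal's confidence, so the style-transfer generator is penalized more heavily for artifacts on likely foreground objects, which is the feedback loop between the detection and style-transformation tasks that the abstract describes.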