Paper Title


Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation

Paper Authors

Yangsong Zhang, Subhankar Roy, Hongtao Lu, Elisa Ricci, Stéphane Lathuilière

Paper Abstract


In this work we address multi-target domain adaptation (MTDA) in semantic segmentation, which consists in adapting a single model from an annotated source dataset to multiple unannotated target datasets that differ in their underlying data distributions. To address MTDA, we propose a self-training strategy that employs pseudo-labels to induce cooperation among multiple domain-specific classifiers. We employ feature stylization as an efficient way to generate image views that form an integral part of self-training. Additionally, to prevent the network from overfitting to noisy pseudo-labels, we devise a rectification strategy that leverages the predictions from different classifiers to estimate the quality of pseudo-labels. Our extensive experiments on numerous settings, based on four different semantic segmentation datasets, validate the effectiveness of the proposed self-training strategy and show that our method outperforms state-of-the-art MTDA approaches. Code available at: https://github.com/Mael-zys/CoaST
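The abstract mentions two mechanisms: feature stylization to generate image views, and a rectification step that estimates pseudo-label quality from the predictions of different domain-specific classifiers. The minimal PyTorch sketch below illustrates both ideas under common assumptions (AdaIN-style channel-statistics transfer for stylization; a confidence-and-agreement mask for rectification). The function names, the threshold, and the exact weighting are hypothetical and are not taken from the paper's released implementation.

```python
import torch


def stylize_features(content_feat, style_feat, eps=1e-5):
    """AdaIN-style stylization (assumption): re-normalize content features
    with the channel-wise mean/std of style features to synthesize a
    target-styled view. Inputs are (B, C, H, W) tensors."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean


def rectified_pseudo_labels(logits_a, logits_b, threshold=0.9, ignore_index=255):
    """Illustrative rectification (assumption): keep a pixel's pseudo-label
    only when two domain-specific classifiers agree and are both confident;
    all other pixels are set to ignore_index. Inputs are (B, K, H, W) logits."""
    conf_a, pred_a = logits_a.softmax(dim=1).max(dim=1)
    conf_b, pred_b = logits_b.softmax(dim=1).max(dim=1)
    reliable = (pred_a == pred_b) & (conf_a > threshold) & (conf_b > threshold)
    pseudo = torch.where(reliable, pred_a, torch.full_like(pred_a, ignore_index))
    return pseudo, reliable
```

In such a sketch, the returned pseudo-label map can directly supervise the stylized view with a cross-entropy loss using `ignore_index=255`, so that pixels deemed unreliable contribute no gradient.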
