Paper Title
Instance-specific and Model-adaptive Supervision for Semi-supervised Semantic Segmentation
Paper Authors
Paper Abstract
Recently, semi-supervised semantic segmentation has achieved promising performance with only a small fraction of labeled data. However, most existing studies treat all unlabeled data equally and barely consider the differences and training difficulties among unlabeled instances. Differentiating unlabeled instances can promote instance-specific supervision that adapts dynamically to the model's evolution. In this paper, we emphasize the importance of instance differences and propose instance-specific and model-adaptive supervision for semi-supervised semantic segmentation, named iMAS. Relying on the model's performance, iMAS employs a class-weighted symmetric intersection-over-union to evaluate the quantitative hardness of each unlabeled instance and supervises the training on unlabeled data in a model-adaptive manner. Specifically, iMAS learns from unlabeled instances progressively by weighing their corresponding consistency losses according to the evaluated hardness. In addition, iMAS dynamically adjusts the augmentation for each instance so that the distortion degree of augmented instances is adapted to the model's generalization capability across the training course. Without integrating additional losses or training procedures, iMAS obtains remarkable performance gains over current state-of-the-art approaches on segmentation benchmarks under different semi-supervised partition protocols.
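The abstract describes two model-adaptive mechanisms: a class-weighted symmetric IoU that scores the hardness of each unlabeled instance, and an unsupervised consistency loss weighted by that score. Below is a minimal PyTorch sketch of the loss-weighting idea. Since the abstract does not spell out the exact IoU weighting, the hardness-to-weight mapping, or the augmentation schedule, the functions `symmetric_iou_hardness` and `weighted_consistency_loss` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of hardness-weighted consistency training on unlabeled data.
# The concrete formulas are assumptions for illustration only.
import torch
import torch.nn.functional as F


def symmetric_iou_hardness(pred_a, pred_b, num_classes, eps=1e-6):
    """Illustrative per-instance hardness: 1 - mean per-class IoU between the
    hard predictions of two model views. pred_a, pred_b: (H, W) long tensors."""
    ious = []
    for c in range(num_classes):
        a = pred_a == c
        b = pred_b == c
        union = (a | b).sum()
        if union == 0:
            continue  # class absent in both predictions; skip it
        ious.append(((a & b).sum().float() + eps) / (union.float() + eps))
    if not ious:
        return torch.tensor(1.0)
    return 1.0 - torch.stack(ious).mean()


def weighted_consistency_loss(student_logits, teacher_logits, num_classes):
    """Consistency loss on unlabeled instances, each weighted by an
    instance-specific factor derived from the estimated hardness."""
    losses, weights = [], []
    for s_logit, t_logit in zip(student_logits, teacher_logits):  # per instance
        pseudo = t_logit.argmax(dim=0)                  # (H, W) pseudo-label
        hardness = symmetric_iou_hardness(
            pseudo, s_logit.argmax(dim=0), num_classes)
        weight = 1.0 - hardness                         # assumed mapping: easier -> larger weight
        ce = F.cross_entropy(s_logit.unsqueeze(0), pseudo.unsqueeze(0))
        losses.append(weight * ce)
        weights.append(weight)
    return torch.stack(losses).sum() / (torch.stack(weights).sum() + 1e-6)


if __name__ == "__main__":
    # Toy example: batch of 2 unlabeled instances, 21 classes (e.g. PASCAL VOC).
    B, C, H, W = 2, 21, 65, 65
    student = torch.randn(B, C, H, W)
    teacher = torch.randn(B, C, H, W)
    print(weighted_consistency_loss(student, teacher, num_classes=C))
```

The same hardness score could also drive the per-instance augmentation strength mentioned in the abstract (e.g., scaling the distortion magnitude of strong augmentations), but that schedule is not specified here and is omitted from the sketch.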