Paper Title
Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning
Authors
Abstract
Recent advances in deep learning have significantly boosted the performance of salient object detection (SOD) at the expense of labeling larger-scale per-pixel annotations. To relieve the burden of labor-intensive labeling, deep unsupervised SOD methods have been proposed that exploit noisy labels generated by handcrafted saliency methods. However, it remains difficult to learn accurate saliency details from such rough, noisy labels. In this paper, we propose to learn saliency from synthetic but clean labels, which naturally have higher pixel-labeling quality without any manual annotation effort. Specifically, we first construct a novel synthetic SOD dataset through a simple copy-paste strategy. Given the large appearance differences between synthetic and real-world scenes, training directly on synthetic data leads to degraded performance in real-world scenarios. To mitigate this problem, we propose a novel unsupervised domain adaptive SOD method that adapts between these two domains via uncertainty-aware self-training. Experimental results show that our proposed method outperforms existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets, and is even comparable to fully supervised ones.
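The abstract only sketches the approach, so the snippet below is a minimal, hedged illustration of what an uncertainty-aware pseudo-label objective could look like for self-training on the real-world (target) domain: per-pixel pseudo-labels from a source-trained teacher are weighted by the teacher's certainty (low prediction entropy). All names here (uncertainty_weighted_pseudo_label_loss, teacher_logits, logits_target) are hypothetical placeholders, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def uncertainty_weighted_pseudo_label_loss(logits_target, teacher_logits, eps=1e-6):
    """Sketch of an uncertainty-aware pseudo-label loss (assumed, not the paper's exact formulation).

    logits_target: student logits on a target-domain image, shape (B, 1, H, W).
    teacher_logits: logits of a source-trained teacher on the same image, same shape.
    """
    with torch.no_grad():
        prob = torch.sigmoid(teacher_logits)                      # teacher saliency map in [0, 1]
        pseudo = (prob > 0.5).float()                             # hard per-pixel pseudo-labels
        # Binary entropy of the teacher prediction: high near 0.5, zero near 0 or 1.
        entropy = -(prob * torch.log(prob + eps)
                    + (1.0 - prob) * torch.log(1.0 - prob + eps))
        # Normalize so the weight is 1 for fully certain pixels and 0 at maximum uncertainty.
        weight = 1.0 - entropy / torch.log(torch.tensor(2.0))
    # Per-pixel BCE against the pseudo-labels, down-weighted where the teacher is uncertain.
    loss = F.binary_cross_entropy_with_logits(logits_target, pseudo, reduction="none")
    return (weight * loss).mean()
```

In such a scheme, confident pseudo-labeled pixels dominate the adaptation loss while ambiguous regions contribute little, which is one common way to keep noisy pseudo-labels from degrading self-training.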