Paper Title
A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels
Paper Authors
Paper Abstract
Fully-supervised salient object detection (SOD) methods have made great progress, but such methods often rely on a large number of pixel-level annotations, which are time-consuming and labour-intensive. In this paper, we focus on a new weakly-supervised SOD task under hybrid labels, where the supervision labels include a large number of coarse labels generated by traditional unsupervised methods and a small number of real labels. To address the issues of label noise and quantity imbalance in this task, we design a new pipeline framework with three sophisticated training strategies. In terms of model framework, we decouple the task into a label refinement sub-task and a salient object detection sub-task, which cooperate with each other and are trained alternately. Specifically, the R-Net is designed as a two-stream encoder-decoder model equipped with a Blender with Guidance and Aggregation mechanisms (BGA), aiming to rectify the coarse labels into more reliable pseudo-labels, while the S-Net is a replaceable SOD network supervised by the pseudo-labels generated by the current R-Net. Note that only the trained S-Net is needed at test time. Moreover, to guarantee the effectiveness and efficiency of network training, we design three training strategies, including an alternate iteration mechanism, a group-wise incremental mechanism, and a credibility verification mechanism. Experiments on five SOD benchmarks show that our method achieves competitive performance against weakly-supervised/unsupervised methods both qualitatively and quantitatively.
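To make the alternate iteration between the two sub-tasks concrete, below is a minimal, hypothetical PyTorch sketch of the loop described in the abstract: an R-Net refines coarse labels into pseudo-labels, and an S-Net is then trained on those pseudo-labels. The RNet/SNet modules, tensor shapes, toy data, and loss choices are all illustrative assumptions, not the authors' implementation; the BGA blender, the group-wise incremental mechanism, and the credibility verification mechanism are omitted here.

```python
# Hypothetical sketch of the alternate iteration idea (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RNet(nn.Module):
    """Stand-in for the label-refinement network.
    Takes an RGB image plus a coarse saliency map and outputs refined logits."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image, coarse_label):
        return self.body(torch.cat([image, coarse_label], dim=1))


class SNet(nn.Module):
    """Stand-in for the replaceable SOD network trained on pseudo-labels."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image):
        return self.body(image)


r_net, s_net = RNet(), SNet()
opt_r = torch.optim.Adam(r_net.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(s_net.parameters(), lr=1e-4)

# Toy hybrid-label data: few pixel-accurate ground truths, many coarse maps.
real_imgs = torch.rand(2, 3, 64, 64)
real_coarse = torch.rand(2, 1, 64, 64)
real_gts = (torch.rand(2, 1, 64, 64) > 0.5).float()
coarse_imgs = torch.rand(8, 3, 64, 64)
coarse_lbls = torch.rand(8, 1, 64, 64)

for round_idx in range(3):  # alternate iteration between the two sub-tasks
    # Step 1: train R-Net on the small real-label set to learn label refinement.
    opt_r.zero_grad()
    r_loss = F.binary_cross_entropy_with_logits(
        r_net(real_imgs, real_coarse), real_gts)
    r_loss.backward()
    opt_r.step()

    # Step 2: use the current R-Net to rectify coarse labels into pseudo-labels.
    with torch.no_grad():
        pseudo = torch.sigmoid(r_net(coarse_imgs, coarse_lbls))

    # Step 3: train S-Net on the pseudo-labels; only S-Net is used at test time.
    opt_s.zero_grad()
    s_loss = F.binary_cross_entropy_with_logits(s_net(coarse_imgs), pseudo)
    s_loss.backward()
    opt_s.step()
```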