Paper Title
Controller-Guided Partial Label Consistency Regularization with Unlabeled Data
Paper Authors
Paper Abstract
Partial label learning (PLL) learns from training examples each associated with multiple candidate labels, among which only one is valid. In recent years, benefiting from the strong capability of dealing with ambiguous supervision and the impetus of modern data augmentation methods, consistency regularization-based PLL methods have achieved a series of successes and become mainstream. However, as the partial annotation becomes insufficient, their performance drops significantly. In this paper, we leverage easily accessible unlabeled examples to facilitate partial label consistency regularization. In addition to a partial supervised loss, our method performs controller-guided consistency regularization at both the label level and the representation level with the help of unlabeled data. To minimize the disadvantages of the insufficient capability of the initial supervised model, we use the controller to estimate the confidence of each current prediction to guide the subsequent consistency regularization. Furthermore, we dynamically adjust the confidence thresholds so that the number of samples of each class participating in consistency regularization remains roughly equal, alleviating the class-imbalance problem. Experiments show that our method achieves satisfactory performance in more practical situations, and its modules can be applied to existing PLL methods to enhance their capabilities.
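The abstract describes two mechanisms that a small sketch may help clarify: (i) gating the unlabeled consistency term by a per-prediction confidence estimate, and (ii) dynamically adjusting per-class thresholds so roughly the same number of samples per class participates. The snippet below is a minimal illustrative sketch of such class-balanced thresholding in PyTorch; all function names (`class_balanced_mask`, `update_thresholds`, `consistency_loss`) and the specific threshold-update rule are assumptions for illustration, not the authors' released implementation or exact controller design.

```python
# Illustrative sketch (assumed, not the paper's code): class-balanced dynamic
# confidence thresholds gating a label-level consistency loss on unlabeled data.
import torch
import torch.nn.functional as F


def class_balanced_mask(probs, thresholds):
    """Keep predictions whose confidence exceeds the threshold of their pseudo-class."""
    conf, pseudo = probs.max(dim=1)          # per-sample confidence and pseudo-label
    return conf >= thresholds[pseudo], pseudo


def update_thresholds(thresholds, probs, momentum=0.9, base=0.95):
    """Assumed update rule: lower thresholds for classes with weaker average
    confidence so each class contributes a comparable number of samples."""
    conf, pseudo = probs.max(dim=1)
    num_classes = probs.size(1)
    class_conf = torch.stack([
        conf[pseudo == c].mean() if (pseudo == c).any() else conf.mean()
        for c in range(num_classes)
    ])
    new_thr = base * class_conf / class_conf.max()   # scale relative to best class
    return momentum * thresholds + (1 - momentum) * new_thr


def consistency_loss(logits_weak, logits_strong, thresholds):
    """Label-level consistency: the strongly augmented view is pushed toward the
    confident prediction of the weakly augmented view (confidence acts as the gate)."""
    probs_weak = F.softmax(logits_weak.detach(), dim=1)
    mask, pseudo = class_balanced_mask(probs_weak, thresholds)
    if mask.sum() == 0:
        return logits_strong.sum() * 0.0              # no confident samples this batch
    return F.cross_entropy(logits_strong[mask], pseudo[mask])
```

In the paper's framing, the confidence estimate comes from a learned controller rather than the raw softmax score used here, and a representation-level consistency term is applied alongside the label-level one shown above.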