Paper Title


Combating noisy labels by agreement: A joint training method with co-regularization

Paper Authors

Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An

Abstract


Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch of data and calculate a joint loss with co-regularization for each training example. Then we select small-loss examples to update the parameters of both networks simultaneously. Trained by the joint loss, the two networks become more and more similar due to the effect of co-regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100, and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
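The two core steps the abstract describes — a per-example joint loss (supervised loss of both networks plus a co-regularization term that pulls their predictions together) and small-loss example selection — can be sketched in NumPy as follows. This is an illustrative reading of the abstract, not the authors' implementation: the function names, the symmetric-KL choice for the co-regularization term, and the `lam` weighting are assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def jocor_loss(logits1, logits2, labels, lam=0.5):
    """Per-example joint loss: cross-entropy of both networks plus a
    symmetric-KL co-regularization term, weighted by `lam` (assumed form)."""
    p1, p2 = softmax(logits1), softmax(logits2)
    n = np.arange(len(labels))
    ce1 = -np.log(p1[n, labels] + 1e-12)
    ce2 = -np.log(p2[n, labels] + 1e-12)
    # Symmetric KL divergence encourages the two networks to agree.
    kl12 = (p1 * (np.log(p1 + 1e-12) - np.log(p2 + 1e-12))).sum(axis=1)
    kl21 = (p2 * (np.log(p2 + 1e-12) - np.log(p1 + 1e-12))).sum(axis=1)
    return (1 - lam) * (ce1 + ce2) + lam * (kl12 + kl21)

def select_small_loss(losses, keep_rate):
    """Indices of the keep_rate fraction of examples with the smallest
    joint loss; both networks would then be updated on these only."""
    k = int(keep_rate * len(losses))
    return np.argsort(losses)[:k]
```

In training, `keep_rate` would typically be scheduled against the estimated noise rate, so that fewer (presumably mislabeled) large-loss examples are kept as training progresses; when the two networks output identical distributions, the co-regularization term vanishes.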
