Paper Title
Debiased Self-Training for Semi-Supervised Learning
Paper Authors
Paper Abstract
Deep neural networks achieve remarkable performance on a wide range of tasks with the aid of large-scale labeled datasets. However, such datasets are time-consuming and labor-intensive to obtain for realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo labels to unlabeled samples. Despite its popularity, self-training is widely believed to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises both from the problem itself and from inappropriate training with potentially incorrect pseudo labels, which accumulates errors over the iterative self-training process. To reduce this bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of self-training bias, where the pseudo-labeling function is accurate on labeled samples yet makes as many mistakes as possible on unlabeled samples. We then adversarially optimize the representations to improve the quality of pseudo labels by avoiding this worst case. Extensive experiments show that DST achieves an average improvement of 6.3% over state-of-the-art methods on standard semi-supervised learning benchmark datasets, and of 18.9% over FixMatch across 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods, helping to stabilize their training and balance performance across classes both when training from scratch and when fine-tuning from pre-trained models.
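To make the first idea concrete, below is a minimal PyTorch sketch of one plausible reading of the decoupled-head design: one head is trained only on labeled data and generates the pseudo labels, while a separate, parameter-independent head consumes them on unlabeled data, so pseudo-label errors cannot feed directly back into their generator. The backbone, dimensions, confidence threshold, and function names are illustrative assumptions, not the authors' reference implementation.

```python
# A minimal sketch of the decoupled-head idea (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical backbone and heads; sizes are placeholders for, e.g., CIFAR-10.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
head = nn.Linear(256, 10)         # sees only ground-truth labels; generates pseudo labels
pseudo_head = nn.Linear(256, 10)  # sees only pseudo labels; parameter-independent of `head`

optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()) + list(pseudo_head.parameters()),
    lr=0.03, momentum=0.9,
)

def dst_step(x_l, y_l, x_u, threshold=0.95):
    """One step in which pseudo-label generation and utilization are decoupled."""
    f_l, f_u = backbone(x_l), backbone(x_u)

    # `head` is trained on labeled data only, so pseudo-label errors
    # cannot accumulate in the head that produces them.
    loss_l = F.cross_entropy(head(f_l), y_l)

    # Generate pseudo labels with `head`, without backpropagating through it.
    with torch.no_grad():
        probs = F.softmax(head(f_u), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf.ge(threshold).float()  # keep only confident predictions

    # Only the separate `pseudo_head` is trained on the pseudo labels.
    loss_u = (F.cross_entropy(pseudo_head(f_u), pseudo_y, reduction="none") * mask).mean()

    optimizer.zero_grad()
    (loss_l + loss_u).backward()
    optimizer.step()
    return pseudo_y, mask
```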
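The second idea, worst-case bias estimation, can be sketched as an alternating adversarial game, continuing the hypothetical code above (again an assumption-laden sketch, not the paper's exact objective): an auxiliary head is trained to fit the labeled samples while making as many mistakes as possible against the current pseudo labels on unlabeled samples, and the backbone is then updated to shrink that worst-case disagreement.

```python
# Continuing the sketch above: a hypothetical worst-case head and its
# alternating adversarial updates (illustrative only).
worst_head = nn.Linear(256, 10)
opt_worst = torch.optim.SGD(worst_head.parameters(), lr=0.03)

def worst_case_step(x_l, y_l, x_u, pseudo_y):
    # (1) Inner maximization: `worst_head` stays accurate on labeled samples
    # (first term) while maximizing its error against the current pseudo
    # labels on unlabeled samples (negated second term). Features are
    # detached so this step does not touch the backbone.
    f_l, f_u = backbone(x_l).detach(), backbone(x_u).detach()
    loss_worst = (F.cross_entropy(worst_head(f_l), y_l)
                  - F.cross_entropy(worst_head(f_u), pseudo_y))
    opt_worst.zero_grad()  # also clears any stale gradients from step (2)
    loss_worst.backward()
    opt_worst.step()

    # (2) Outer minimization: update the backbone so that even this adversarial
    # head agrees with the pseudo labels, avoiding the estimated worst case.
    # `optimizer` (from the sketch above) does not step `worst_head`.
    loss_repr = F.cross_entropy(worst_head(backbone(x_u)), pseudo_y)
    optimizer.zero_grad()
    loss_repr.backward()
    optimizer.step()
```

The minus sign implements the min-max structure: the auxiliary head ascends the unlabeled error while the backbone descends it, which is one way to realize the adversarial representation optimization the abstract describes.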