Paper Title
Self-Loop Uncertainty: A Novel Pseudo-Label for Semi-Supervised Medical Image Segmentation
Authors
Abstract
Following the success of deep neural networks in natural image processing, an increasing number of studies have developed deep-learning-based frameworks for medical image segmentation. However, since pixel-wise annotation of medical images is laborious and expensive, the amount of annotated data is usually insufficient to train a neural network well. In this paper, we propose a semi-supervised approach to train neural networks for medical image segmentation with limited labeled data and a large quantity of unlabeled images. A novel pseudo-label (namely self-loop uncertainty), generated by recurrently optimizing the neural network with a self-supervised task, is adopted as the ground truth for the unlabeled images to augment the training set and boost segmentation accuracy. The proposed self-loop uncertainty can be seen as an approximation of the uncertainty estimate yielded by ensembling multiple models, while significantly reducing inference time. Experimental results on two publicly available datasets demonstrate the effectiveness of our semi-supervised approach.
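The abstract describes turning the predictions collected across the self-loop optimization steps into an uncertainty-aware pseudo-label for unlabeled images. The following is a minimal illustrative sketch of that general idea, not the paper's exact algorithm: the probability maps from successive self-loop steps stand in for an ensemble, their per-pixel mean gives the pseudo-label, and the binary entropy of the mean serves as the uncertainty estimate used to mask out unreliable pixels. The function name, the entropy-based uncertainty measure, and the threshold value are all assumptions for illustration.

```python
import numpy as np

def self_loop_pseudo_label(prob_maps, uncertainty_threshold=0.5):
    """Illustrative sketch (not the paper's exact method): combine the
    foreground-probability maps predicted at each self-loop step into a
    hard pseudo-label plus a per-pixel confidence mask.

    prob_maps: list of (H, W) arrays of foreground probabilities,
               one per self-loop optimization step (hypothetical input).
    """
    probs = np.stack(prob_maps, axis=0)   # (steps, H, W)
    mean = probs.mean(axis=0)             # ensemble-like average prediction

    # Binary entropy of the averaged prediction as an uncertainty proxy.
    eps = 1e-8
    entropy = -(mean * np.log(mean + eps)
                + (1.0 - mean) * np.log(1.0 - mean + eps))

    pseudo = (mean > 0.5).astype(np.int64)   # hard pseudo-label
    mask = entropy < uncertainty_threshold   # keep only confident pixels
    return pseudo, mask

# Hypothetical 2x2 probability maps from three self-loop steps.
maps = [np.array([[0.90, 0.60], [0.10, 0.40]]),
        np.array([[0.95, 0.55], [0.05, 0.45]]),
        np.array([[0.85, 0.65], [0.15, 0.35]])]
label, keep = self_loop_pseudo_label(maps)
# Pixels with mean near 0.5 get high entropy and are masked out,
# so only the confidently fore-/background pixels supervise training.
```

In a real training loop the masked pseudo-label would supervise the segmentation loss on unlabeled images only where `keep` is true, which is one common way to exploit an uncertainty map of this kind.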