Paper Title
Confidence-Guided Unsupervised Domain Adaptation for Cerebellum Segmentation
Paper Authors
Paper Abstract
The lack of a comprehensive high-resolution atlas of the cerebellum has hampered studies of cerebellar involvement in normal brain function and disease. A good representation of the tightly foliated aspect of the cerebellar cortex is difficult to achieve because of the highly convoluted surface and the time it would take for manual delineation. The quality of manual segmentation is influenced by human expert judgment, and automatic labelling is constrained by the limited robustness of existing segmentation algorithms. The 20 µm isotropic BigBrain dataset provides an unprecedentedly high-resolution framework for semantic segmentation compared to the 1000 µm (1 mm) resolution afforded by magnetic resonance imaging. To dispense with the manual annotation requirement, we propose to train a model to adaptively transfer the annotation from the cerebellum of the Allen Human Brain Atlas to the BigBrain in an unsupervised manner, taking into account the different staining and spacing between sections. The distinct visual discrepancy between the Allen Brain and the BigBrain prevents existing approaches from providing meaningful segmentation masks, and artifacts caused by sectioning and histological slice preparation in the BigBrain data pose an extra challenge. To address these problems, we propose a two-stage framework in which we first transfer the Allen Brain cerebellum to a space sharing visual similarity with the BigBrain. We then introduce a self-training strategy with a confidence map to guide the model in learning iteratively from the noisy pseudo labels. Qualitative results validate the effectiveness of our approach, and quantitative experiments show that our method achieves over 2.6% loss reduction compared with other approaches.
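To make the confidence-guided self-training idea concrete, below is a minimal PyTorch sketch of one plausible realization, not the paper's actual implementation: the teacher/student arrangement, the 0.9 confidence threshold, the ignore index of 255, and the confidence-weighted cross-entropy are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255      # label value excluded from the loss (assumption)
CONF_THRESHOLD = 0.9    # minimum softmax confidence to keep a pseudo label (assumption)

@torch.no_grad()
def make_pseudo_labels(teacher, images):
    """Predict pseudo labels and a per-pixel confidence map; mask out low-confidence pixels."""
    probs = F.softmax(teacher(images), dim=1)            # (B, C, H, W), teacher in eval mode
    confidence, pseudo = probs.max(dim=1)                 # per-pixel confidence and class index
    pseudo[confidence < CONF_THRESHOLD] = IGNORE_INDEX    # drop unreliable pseudo labels
    return pseudo, confidence

def self_training_step(student, teacher, images, optimizer):
    """One self-training update: the student learns from the teacher's confident pseudo labels."""
    pseudo, confidence = make_pseudo_labels(teacher, images)
    logits = student(images)
    # Per-pixel cross-entropy, weighted by confidence so noisier labels contribute less.
    loss_map = F.cross_entropy(logits, pseudo, ignore_index=IGNORE_INDEX, reduction="none")
    valid = (pseudo != IGNORE_INDEX).float()
    loss = (loss_map * confidence * valid).sum() / valid.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a scheme the teacher's predictions would typically be refreshed between rounds (or updated as an exponential moving average of the student), so that the pseudo labels and confidence map improve as training iterates.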