Paper Title

Domain Adaptive Medical Image Segmentation via Adversarial Learning of Disease-Specific Spatial Patterns

Authors

Li, Hongwei, Loehr, Timo, Sekuboyina, Anjany, Zhang, Jianguo, Wiestler, Benedikt, Menze, Bjoern

Abstract

In medical imaging, the heterogeneity of multi-centre data impedes the applicability of deep learning-based methods and results in significant performance degradation when applying models in an unseen data domain, e.g. a new centre or a new scanner. In this paper, we propose an unsupervised domain adaptation framework for boosting image segmentation performance across multiple domains without using any manual annotations from the new target domains, but by re-calibrating the networks on a few images from the target domain. To achieve this, we enforce architectures to be adaptive to new data by rejecting improbable segmentation patterns and implicitly learning through semantic and boundary information, thus capturing disease-specific spatial patterns in an adversarial optimization. The adaptation process needs continuous monitoring, however; as we cannot assume the presence of ground-truth masks for the target domain, we propose two new metrics to monitor the adaptation process, and strategies to train the segmentation algorithm in a stable fashion. We build upon well-established 2D and 3D architectures and perform extensive experiments on three cross-centre brain lesion segmentation tasks, involving multi-centre public and in-house datasets. We demonstrate that re-calibrating the deep networks on a few unlabeled images from the target domain improves the segmentation accuracy significantly.
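The core idea — re-calibrating a source-trained segmenter on a few unlabeled target images by letting a discriminator reject implausible segmentation patterns — can be illustrated with a toy 1-D sketch. Everything below (the per-pixel logistic "segmenter", the foreground-fraction mask feature, the finite-difference updates) is an illustrative assumption for exposition, not the paper's 2D/3D architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def segment(theta, x):
    """Per-pixel soft segmentation: sigmoid(a * x + b)."""
    a, b = theta
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

def mask_feature(mask):
    """Crude 'spatial pattern' summary of a mask: foreground fraction."""
    return mask.mean()

def disc(phi, mask):
    """Discriminator: probability that a mask looks like a source mask."""
    w, c = phi
    return 1.0 / (1.0 + np.exp(-(w * mask_feature(mask) + c)))

def num_grad(f, p, eps=1e-4):
    """Finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

# Source domain: 1-D "images"; ground truth marks bright pixels (x > 0).
src_masks = [(rng.normal(0.0, 1.0, 50) > 0).astype(float) for _ in range(20)]
# Target domain: an intensity shift (e.g. a new scanner) breaks the model.
tgt = [rng.normal(1.5, 1.0, 50) for _ in range(20)]

theta = np.array([4.0, 0.0])   # segmenter "trained" on the source domain
phi = np.array([0.0, 0.0])     # discriminator parameters

frac_before = float(np.mean([mask_feature(segment(theta, x)) for x in tgt]))

for _ in range(300):
    # 1) Discriminator step: source ground-truth masks are "real",
    #    predictions on unlabeled target images are "fake".
    def d_loss(p):
        real = np.mean([np.log(disc(p, m) + 1e-9) for m in src_masks])
        fake = np.mean([np.log(1.0 - disc(p, segment(theta, x)) + 1e-9)
                        for x in tgt])
        return -(real + fake)
    phi -= 0.5 * num_grad(d_loss, phi)

    # 2) Re-calibration step: update the segmenter so that its target
    #    predictions are judged plausible -- no target labels are used.
    def g_loss(p):
        return -np.mean([np.log(disc(phi, segment(p, x)) + 1e-9) for x in tgt])
    theta -= 0.1 * num_grad(g_loss, theta)

frac_after = float(np.mean([mask_feature(segment(theta, x)) for x in tgt]))
print(frac_before, frac_after)
```

Before adaptation the intensity shift makes the segmenter label almost every target pixel as lesion; the adversarial signal, which only compares mask statistics against the source domain, pulls the predicted foreground fraction back toward the source prior without ever seeing a target label — a scalar stand-in for the richer semantic and boundary cues the paper exploits.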
