Paper Title

Invariant Content Synergistic Learning for Domain Generalization of Medical Image Segmentation

Paper Authors

Yuxin Kang, Hansheng Li, Xuan Zhao, Dongqing Hu, Feihong Liu, Lei Cui, Jun Feng, Lin Yang

Paper Abstract

While achieving remarkable success in medical image segmentation, deep convolutional neural networks (DCNNs) often fail to maintain their robustness when confronted with test data from a novel distribution. To address this drawback, the inductive bias of DCNNs has recently been well recognized. Specifically, DCNNs exhibit an inductive bias towards image style (e.g., superficial texture) rather than invariant content (e.g., object shapes). In this paper, we propose a method, named Invariant Content Synergistic Learning (ICSL), to improve the generalization ability of DCNNs on unseen datasets by controlling this inductive bias. First, ICSL mixes the styles of training instances to perturb the training distribution; that is, more diverse domains or styles are made available for training DCNNs. Based on the perturbed distribution, we carefully design a dual-branch invariant content synergistic learning strategy to prevent style-biased predictions and focus more on the invariant content. Extensive experimental results on two typical medical image segmentation tasks show that our approach performs better than state-of-the-art domain generalization methods.
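The abstract does not specify how the style mixing is implemented. Below is a minimal PyTorch sketch, assuming a MixStyle-like recipe in which per-instance feature statistics (mean and standard deviation) act as the "style" and are interpolated with those of a randomly paired instance in the batch; the function name mix_instance_styles and the Beta-distributed mixing weight are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def mix_instance_styles(x, alpha=0.1, eps=1e-6):
    """Perturb the training distribution by mixing per-instance feature
    statistics (a proxy for "style") across the batch.

    NOTE: this is a hypothetical, MixStyle-like sketch of the abstract's
    "mixes the styles of training instances"; ICSL's exact recipe may differ.

    x: feature maps of shape (B, C, H, W).
    """
    B = x.size(0)

    # Instance-wise mean/std capture style-like cues (texture, contrast).
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = (x.var(dim=(2, 3), keepdim=True) + eps).sqrt()

    # Strip each instance's own style, keeping only normalized content.
    x_norm = (x - mu) / sigma

    # Sample mixing weights and a random pairing within the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1, 1, 1)).to(x.device)
    perm = torch.randperm(B, device=x.device)

    # Interpolate statistics with those of the paired instance.
    mu_mix = lam * mu + (1 - lam) * mu[perm]
    sigma_mix = lam * sigma + (1 - lam) * sigma[perm]

    # Re-apply the mixed style: same content, perturbed style/domain.
    return x_norm * sigma_mix + mu_mix
```

Under this reading, the dual-branch strategy would pass the original and the style-perturbed features through two branches of the same segmenter and encourage their predictions to agree (e.g., via a consistency term), so that the network is penalized for relying on style rather than on the invariant content.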
