Paper Title
Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion
Paper Authors
Paper Abstract
Accurate medical image segmentation commonly requires effective learning of complementary information from multimodal data. In clinical practice, however, we often encounter the problem of missing imaging modalities. We tackle this challenge and propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities. Our network uses feature disentanglement to decompose each input modality into a modality-specific appearance code, which is unique to that modality, and a modality-invariant content code, which captures the multimodal information needed for the segmentation task. With enhanced modality invariance, the disentangled content codes from the individual modalities are fused into a shared representation that is robust to missing data. The fusion is achieved via a learning-based strategy that gates the contribution of different modalities at different spatial locations. We validate our method on the important yet challenging task of multimodal brain tumor segmentation using the BRATS challenge dataset. While remaining competitive with state-of-the-art approaches when all modalities are available, our method achieves outstanding robustness under various missing-modality situations, exceeding the state-of-the-art method by over 16% on average in Dice score for whole tumor segmentation.
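To make the gated fusion idea concrete, below is a minimal PyTorch sketch (not the authors' code) of one way such a fusion could work: each modality's content code yields a spatially-varying gate map, gates are normalized across the available modalities, and the gated codes are summed into a shared representation that tolerates missing inputs. The module and variable names (GatedFusion, gate_convs, present_mask) are illustrative assumptions.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of location-wise gated fusion of modality content codes."""

    def __init__(self, num_modalities: int, channels: int):
        super().__init__()
        # One 3D conv per modality predicts an unnormalized gate map
        # from that modality's content code.
        self.gate_convs = nn.ModuleList(
            [nn.Conv3d(channels, 1, kernel_size=3, padding=1)
             for _ in range(num_modalities)]
        )

    def forward(self, contents, present_mask):
        # contents: list of (B, C, D, H, W) content codes, one per modality.
        # present_mask: (num_modalities,) bool tensor; False marks a missing
        # modality, whose gate is suppressed before normalization.
        logits = torch.stack(
            [conv(c) for conv, c in zip(self.gate_convs, contents)], dim=0
        )  # (M, B, 1, D, H, W)
        mask = present_mask.view(-1, 1, 1, 1, 1, 1)
        logits = logits.masked_fill(~mask, float("-inf"))
        # Softmax across modalities -> per-location contribution weights;
        # missing modalities receive exactly zero weight.
        gates = torch.softmax(logits, dim=0)
        fused = (gates * torch.stack(contents, dim=0)).sum(dim=0)
        return fused  # (B, C, D, H, W) shared representation

# Usage: fuse content codes from 4 MRI modalities, with one missing.
fusion = GatedFusion(num_modalities=4, channels=16)
codes = [torch.randn(1, 16, 8, 32, 32) for _ in range(4)]
present = torch.tensor([True, True, False, True])  # third modality absent
shared = fusion(codes, present)
print(shared.shape)  # torch.Size([1, 16, 8, 32, 32])

Because the softmax renormalizes over only the modalities that are present, the shared representation degrades gracefully rather than collapsing when an input is dropped, which matches the robustness behavior the abstract describes.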