Paper Title

Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data

Authors

Neubauer, Theresa, Wimmer, Maria, Berg, Astrid, Major, David, Lenis, Dimitrios, Beyer, Thomas, Saponjski, Jelena, Bühler, Katja

Abstract

Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods. Typically, studies dealing with this topic fuse multimodal image data to improve the tumor segmentation contour for a single imaging modality. However, they do not take into account that tumor characteristics are emphasized differently by each modality, which affects the tumor delineation. Thus, the tumor segmentation is modality- and task-dependent. This is especially the case for soft tissue sarcomas, where, due to necrotic tumor tissue, the segmentation differs vastly. Closing this gap, we develop a modality-specific sarcoma segmentation model that utilizes multimodal image data to improve the tumor delineation on each individual modality. We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches, and the use of resource-efficient densely connected convolutional layers. We further conduct experiments to analyze how different input modalities and encoder-decoder fusion strategies affect the segmentation result. We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans. The results show that our multimodal co-segmentation model provides better modality-specific tumor segmentation than models using only the PET or MRI (T1 and T2) scan as input.
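The abstract's core idea — modality-specific encoder and decoder branches built from densely connected convolutional layers, with multimodal features fused between them so that each modality gets its own segmentation mask — can be sketched roughly as follows. This is a minimal illustration assuming a PyTorch setting; all names (`DenseBlock`, `CoSegNet`), the simple concatenation fusion at the bottleneck, and the layer counts are illustrative assumptions, not the authors' actual architecture or fusion strategy.

```python
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Densely connected conv layers: each layer receives the
    concatenation of the input and all earlier feature maps."""

    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU()))
            ch += growth
        self.out_ch = ch  # final channel count after dense concatenation

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class CoSegNet(nn.Module):
    """Co-segmentation sketch: one dense encoder per modality,
    feature fusion at the bottleneck, and one decoder head per
    modality, so every modality gets its own tumor mask."""

    def __init__(self, modalities=("pet", "t1", "t2"), growth=8):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: DenseBlock(in_ch=1, growth=growth, n_layers=2)
             for m in modalities})
        enc_ch = next(iter(self.encoders.values())).out_ch
        fused_ch = enc_ch * len(modalities)  # channel-wise concat fusion
        self.decoders = nn.ModuleDict({
            m: nn.Sequential(
                nn.Conv2d(fused_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(growth, 1, kernel_size=1))  # one mask logit map
            for m in modalities})

    def forward(self, inputs):
        # inputs: dict modality -> (B, 1, H, W) tensor
        enc = {m: self.encoders[m](x) for m, x in inputs.items()}
        fused = torch.cat([enc[m] for m in self.encoders], dim=1)
        return {m: torch.sigmoid(dec(fused))
                for m, dec in self.decoders.items()}


# Usage: three single-channel "scans" in, three probability masks out.
net = CoSegNet()
scans = {m: torch.randn(2, 1, 32, 32) for m in ("pet", "t1", "t2")}
masks = net(scans)
```

Because all decoders see the same fused features, information from PET can sharpen the MRI masks and vice versa, while the separate heads keep the output modality-specific — the property the abstract emphasizes.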
