Paper Title

A Multi-View Dynamic Fusion Framework: How to Improve the Multimodal Brain Tumor Segmentation from Multi-Views?

Authors

Yi Ding, Wei Zheng, Guozheng Wu, Ji Geng, Mingsheng Cao, Zhiguang Qin

Abstract

When diagnosing a brain tumor, doctors usually make a diagnosis by observing multimodal brain images from the axial view, the coronal view and the sagittal view, and then make a comprehensive decision based on the information obtained from these multiple views. Inspired by this diagnostic process, and in order to further exploit the 3D information hidden in the dataset, this paper proposes a multi-view dynamic fusion framework to improve the performance of brain tumor segmentation. The proposed framework consists of 1) a multi-view deep neural network architecture, in which multiple learning networks segment the brain tumor from different views and each deep neural network corresponds to the multimodal brain images from one single view, and 2) a dynamic decision fusion method, which fuses the segmentation results from multiple views into an integrated one; two fusion strategies, voting and weighted averaging, are adopted to evaluate the fusing process. Moreover, a multi-view fusion loss, consisting of a segmentation loss, a transition loss and a decision loss, is proposed to facilitate the training of the multi-view learning networks, so as to keep the consistency of appearance and space not only when fusing the segmentation results but also when training the learning networks.

By evaluating the proposed framework on BRATS 2015 and BRATS 2018, it can be found that the fused result from multiple views achieves better performance than the segmentation result from a single view, and the effectiveness of the proposed multi-view fusion loss is also demonstrated. Moreover, the proposed framework achieves better segmentation performance and higher efficiency than counterpart methods.
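As a rough illustration of the two fusion strategies named in the abstract (voting and weighted averaging), the sketch below fuses hypothetical per-view predictions. This is not the authors' implementation; the function names, array shapes and weighting scheme are all assumptions for illustration only.

```python
import numpy as np

def fuse_votes(view_preds):
    """Majority voting over per-view label maps (illustrative sketch).

    view_preds: list of integer label arrays, one per view (e.g. axial,
    coronal, sagittal), all with the same shape. Each voxel receives the
    label that the most views agree on."""
    stacked = np.stack(view_preds, axis=0)          # shape (V, ...)
    n_labels = int(stacked.max()) + 1
    # Count, for every voxel, how many views voted for each label.
    counts = np.stack([(stacked == lab).sum(axis=0)
                       for lab in range(n_labels)], axis=0)
    return counts.argmax(axis=0)

def fuse_weighted(view_probs, weights):
    """Weighted averaging of per-view class-probability maps.

    view_probs: list of arrays shaped (C, ...), one per view.
    weights: one scalar per view; normalized here so they sum to 1.
    Returns the per-voxel argmax label of the averaged probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    avg = sum(wi * p for wi, p in zip(w, view_probs))
    return avg.argmax(axis=0)
```

For example, with three per-view label maps, `fuse_votes` keeps a tumor label wherever at least two of the three views predict it; `fuse_weighted` instead lets a more reliable view dominate via a larger weight before the final argmax.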
