Paper Title
A Neural Ordinary Differential Equation Model for Visualizing Deep Neural Network Behaviors in Multi-Parametric MRI based Glioma Segmentation
Paper Authors
Paper Abstract
Purpose: To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network (DNN) behavior during multi-parametric MRI (mp-MRI) based glioma segmentation, as a method to enhance deep learning explainability. Methods: Hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we designed a novel deep learning model, neural ODE, in which deep feature extraction was governed by an ODE without explicit expression. The dynamics of 1) MR images after interactions with the DNN and 2) segmentation formation can be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate the utilization of each MR modality by the DNN towards the final segmentation results. The proposed neural ODE model was demonstrated using 369 glioma patients with a 4-modality mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT). The key MR modalities with significant utilization by the DNN were identified based on ACC analysis. Segmentation results by the DNN using only the key MR modalities were compared to those using all 4 MR modalities. Results: All neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentation, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all 4 MR modalities, the Dice coefficients of ET (0.784->0.775), TC (0.760->0.758), and WT (0.841->0.837) using only the key modalities showed minimal differences without statistical significance. Conclusion: The neural ODE model offers a new tool for optimizing deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep learning applications.
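To make the continuous-time feature-extraction idea concrete, the following minimal sketch (PyTorch with the torchdiffeq package, not the authors' implementation) shows how an ODE-parameterized network can be solved on a grid of intermediate time points so that each intermediate state can be rendered per modality; the ODEFunc architecture, channel count, image size, and time grid are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint


    class ODEFunc(nn.Module):
        # Parameterizes dh/dt = f(h, t); a small convolutional block stands in
        # for the (unspecified) deep feature extractor described in the abstract.
        def __init__(self, channels: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, channels, kernel_size=3, padding=1),
            )

        def forward(self, t, h):
            return self.net(h)


    # h0: a stacked mp-MRI slice (batch, 4 modalities: T1, T1-Ce, T2, FLAIR, H, W)
    # used as the initial ODE state; random data here for illustration only.
    h0 = torch.randn(1, 4, 128, 128)
    func = ODEFunc(channels=4)

    # Solving the ODE on a grid of intermediate times returns the feature state
    # at every time point; plotting each frame per modality visualizes how the
    # input images evolve toward the segmentation-ready representation.
    t = torch.linspace(0.0, 1.0, steps=11)
    states = odeint(func, h0, t)   # shape: (11, 1, 4, 128, 128)
    frames = states.detach()       # each frame can be displayed per modality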
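The ACC analysis and the Dice comparison reported above can be prototyped as follows. The exact ACC definition is given in the paper; this hedged sketch assumes, as a proxy, that each modality's contribution at a given ODE time is its accumulated feature magnitude inside the segmentation mask, normalized so that the final values give each modality's relative share, and it adds the standard Dice coefficient used for the ET/TC/WT comparison.

    import torch


    def accumulative_contribution(states: torch.Tensor,
                                  mask: torch.Tensor) -> torch.Tensor:
        # states: (T, B, M, H, W) ODE solution frames for M input modalities.
        # mask:   (B, 1, H, W) binary segmentation mask (e.g., ET, TC, or WT).
        # Returns (T, M): cumulative contribution per modality over ODE time,
        # normalized so the final values sum to 1 across modalities (an assumed
        # proxy, not the paper's exact formulation).
        contrib = (states.abs() * mask.unsqueeze(0)).sum(dim=(1, 3, 4))  # (T, M)
        acc = contrib.cumsum(dim=0)
        return acc / acc[-1].sum().clamp(min=1e-8)


    def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> float:
        # Standard Dice coefficient, 2|A ∩ B| / (|A| + |B|), on binary masks.
        inter = (pred * target).sum()
        return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))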