Paper Title

Clinically Translatable Direct Patlak Reconstruction from Dynamic PET with Motion Correction Using Convolutional Neural Network

Authors

Xie, Nuobei; Gong, Kuang; Guo, Ning; Qin, Zhixing; Cui, Jianan; Wu, Zhifang; Liu, Huafeng; Li, Quanzheng

Abstract

The Patlak model is widely used in 18F-FDG dynamic positron emission tomography (PET) imaging, where the estimated parametric images reveal important biochemical and physiological information. Because of its better noise modeling and the additional information extracted from the raw sinograms, direct Patlak reconstruction has gained popularity over the indirect approach, which utilizes the reconstructed dynamic PET images alone. However, the raw dynamic PET data required as the prerequisite of direct Patlak methods are rarely stored in clinics and are difficult to obtain. In addition, direct reconstruction is time-consuming due to the bottleneck of multiple-frame reconstruction. All of these impede the clinical adoption of direct Patlak reconstruction. In this work, we propose a data-driven framework that maps dynamic PET images to high-quality, motion-corrected direct Patlak images through a convolutional neural network. To handle patient motion during the long dynamic PET scan, we incorporate the motion correction into the backward/forward projections of the direct reconstruction to better fit the statistical model. Results based on fifteen clinical 18F-FDG dynamic brain PET datasets demonstrate the superiority of the proposed framework over Gaussian, nonlocal means, and BM4D denoising in terms of image bias and contrast-to-noise ratio.
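
For readers' reference, the Patlak model and the motion-aware forward model that underlie direct reconstruction can be sketched as follows. This is a standard formulation consistent with the abstract, not the authors' exact notation; the symbols K_i, V, C_p, P, W_m, s_m, and r_m are introduced here purely for illustration.

% Patlak graphical model: after the tracer reaches quasi-steady state (t > t*),
% the tissue time-activity curve C_T(t) is linear in the integral of the
% plasma input C_p(t); the slope K_i is the net influx rate constant and
% the intercept V an apparent distribution volume.
C_T(t) \approx K_i \int_0^t C_p(\tau)\, d\tau + V\, C_p(t), \qquad t > t^*

% Direct Patlak reconstruction estimates K_i and V from the sinogram data of
% each late frame m. With frame-wise patient motion, a warp operator W_m can
% be folded into the system matrix P, so that the backward/forward projections
% act on motion-corrected images, as described in the abstract:
\bar{y}_m = P\, W_m\, x_m(K_i, V) + s_m + r_m

Here x_m(K_i, V) is the frame-m image predicted by the Patlak model, \bar{y}_m the expected sinogram, and s_m, r_m the scatter and randoms contributions. The proposed CNN then learns the mapping from the reconstructed dynamic frames to the resulting motion-corrected direct Patlak images, so that raw sinograms are not required when the trained network is applied.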
