Paper Title


TriDoNet: A Triple Domain Model-driven Network for CT Metal Artifact Reduction

Authors

Baoshun Shi, Ke Jiang, Shaolei Zhang, Qiusheng Lian, Yanwei Qin

Abstract


Recent deep learning-based methods have achieved promising performance for computed tomography metal artifact reduction (CTMAR). However, most of them suffer from two limitations: (i) domain knowledge is not fully embedded into network training; (ii) metal artifacts lack an effective representation model. These limitations leave room for further performance improvement. To address these issues, we propose a novel triple domain model-driven CTMAR network, termed TriDoNet, whose training exploits triple domain knowledge, i.e., knowledge of the sinogram, CT image, and metal artifact domains. Specifically, to explore the non-local repetitive streaking patterns of metal artifacts, we encode them with an explicit tight frame sparse representation model with adaptive thresholds. Furthermore, we design a contrastive regularization (CR) built upon contrastive learning that exploits clean CT images and metal-affected images as positive and negative samples, respectively. Experimental results show that our TriDoNet can generate superior artifact-reduced CT images.
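To make the two key ingredients of the abstract concrete, the sketch below illustrates (a) the soft-thresholding operator that underlies sparse coding with adaptive thresholds, and (b) a simple contrastive regularization that pulls a restored image toward a clean positive sample and away from a metal-affected negative sample. This is a minimal illustration only, not the authors' implementation: the function names, the L1 distance, and the ratio form of the loss are assumptions for exposition.

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Soft-thresholding (shrinkage) operator used in sparse coding:
    shrinks each coefficient toward zero by tau, zeroing small ones.
    In an adaptive-threshold model, tau would be learned per channel."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

def contrastive_regularization(anchor, positive, negative, eps=1e-8):
    """Toy contrastive regularization: the restored image (anchor) should
    be close to the clean CT image (positive) and far from the
    metal-affected input (negative). Distances here are plain L1 means;
    the ratio is small when the anchor resembles the positive sample."""
    d_pos = np.abs(anchor - positive).mean()
    d_neg = np.abs(anchor - negative).mean()
    return d_pos / (d_neg + eps)
```

For example, `soft_threshold(np.array([0.5, -2.0, 0.1]), 0.3)` yields `[0.2, -1.7, 0.0]`: coefficients below the threshold vanish, which is how the model suppresses weak responses while keeping the dominant streaking-pattern atoms.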
