Paper Title

Masked Autoencoders for Low dose CT denoising

Paper Authors

Dayang Wang, Yongshun Xu, Shuo Han, Hengyong Yu

Paper Abstract

Low-dose computed tomography (LDCT) reduces the X-ray radiation but compromises image quality with more noise and artifacts. A plethora of transformer models have been developed recently to improve LDCT image quality. However, the success of a transformer model relies on a large amount of paired noisy and clean data, which is often unavailable in clinical applications. In the computer vision and natural language processing fields, masked autoencoders (MAE) have been proposed as an effective label-free self-pretraining method for transformers, due to their excellent feature representation ability. Here, we redesign the classical encoder-decoder learning model to match the denoising task and apply it to the LDCT denoising problem. The MAE can leverage the unlabeled data and facilitate structural preservation for the LDCT denoising model when ground truth data are missing. Experiments on the Mayo dataset validate that the MAE can boost the transformer's denoising performance and relieve the dependence on ground truth data.
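The abstract describes MAE-style self-pretraining: a transformer encoder processes only a random subset of image patches, a lightweight decoder reconstructs the masked ones, and the pretrained encoder is then reused for denoising. The sketch below illustrates this idea on unlabeled LDCT patches; it is not the authors' implementation, and `TinyMAE`, the patch size, masking ratio, and network depths are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of MAE-style self-pretraining on
# unlabeled CT patches. Dimensions and hyperparameters are illustrative.
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    """Toy masked autoencoder over non-overlapping image patches."""

    def __init__(self, img_size=64, patch=8, dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)              # patch -> token
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, patch * patch)               # token -> pixels

    def patchify(self, x):                                      # (B,1,H,W) -> (B,N,p*p)
        B = x.shape[0]
        x = x.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        return x.reshape(B, -1, self.patch * self.patch)

    def forward(self, x):
        tokens = self.embed(self.patchify(x)) + self.pos        # (B,N,D)
        B, N, D = tokens.shape
        keep = int(N * (1 - self.mask_ratio))
        idx = torch.rand(B, N, device=x.device).argsort(dim=1)  # random per-image shuffle
        visible = torch.gather(tokens, 1, idx[:, :keep, None].expand(-1, -1, D))
        latent = self.encoder(visible)                          # encode visible patches only
        # Re-insert mask tokens at the hidden positions, then decode all patches.
        full = torch.cat([latent, self.mask_token.expand(B, N - keep, D)], dim=1)
        unshuffle = idx.argsort(dim=1)
        full = torch.gather(full, 1, unshuffle[:, :, None].expand(-1, -1, D))
        pred = self.head(self.decoder(full + self.pos))         # (B,N,p*p)
        mask = torch.zeros(B, N, device=x.device)
        mask.scatter_(1, idx[:, keep:], 1.0)                    # 1 = masked patch
        return pred, self.patchify(x), mask


# Self-pretraining step: reconstruction loss on masked patches only,
# so no paired clean images are required.
model = TinyMAE()
ldct = torch.randn(2, 1, 64, 64)                                # stand-in for unlabeled LDCT patches
pred, target, mask = model(ldct)
loss = ((pred - target) ** 2).mean(dim=-1)
loss = (loss * mask).sum() / mask.sum()
loss.backward()
```

Because the reconstruction target is the (noisy) input itself and the loss is taken only over masked patches, this stage uses no ground truth; in the paper's setting, the pretrained encoder would subsequently be fine-tuned with a denoising objective on whatever paired Mayo data are available.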
