Paper Title

Interaction of a priori Anatomic Knowledge with Self-Supervised Contrastive Learning in Cardiac Magnetic Resonance Imaging

Authors

Makiya Nakashima, Inyeop Jang, Ramesh Basnet, Mitchel Benovoy, W. H. Wilson Tang, Christopher Nguyen, Deborah Kwon, Tae Hyun Hwang, David Chen

Abstract

Training deep learning models on cardiac magnetic resonance imaging (CMR) can be a challenge due to the small amount of expert-generated labels and the inherent complexity of the data source. Self-supervised contrastive learning (SSCL) has recently been shown to boost performance in several medical imaging tasks. However, it is unclear how much the pre-trained representation reflects the primary organ of interest compared to spurious surrounding tissue. In this work, we evaluate the optimal method of incorporating prior knowledge of anatomy into an SSCL training paradigm. Specifically, we evaluate using a segmentation network to explicitly localize the heart in CMR images, followed by SSCL pretraining, on multiple diagnostic tasks. We find that using a priori knowledge of anatomy can greatly improve downstream diagnostic performance. Furthermore, SSCL pre-training with in-domain data generally improved downstream performance and produced more human-like saliency compared to end-to-end training and ImageNet pre-trained networks. However, introducing anatomic knowledge into pre-training generally does not have a significant impact.
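The pipeline the abstract describes (segment the heart, mask out surrounding tissue, then contrastive pretraining) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `mask_to_heart` helper is hypothetical, and a SimCLR-style NT-Xent loss is assumed as the contrastive objective.

```python
import numpy as np

def mask_to_heart(image, seg_mask):
    """Zero out pixels outside the heart segmentation (hypothetical helper
    standing in for the paper's segmentation-based localization step)."""
    return image * (seg_mask > 0)

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two batches of view embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of sample i.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarity
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive for row i is its other view, offset by n in the stacked batch.
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

In this sketch, incorporating anatomic knowledge simply means applying `mask_to_heart` to both augmented views before encoding them, so the contrastive objective cannot rely on tissue outside the heart.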
