Paper Title

Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery

Authors

Yurt, Mahmut, Özbey, Muzaffer, Dar, Salman Ul Hassan, Tınaz, Berk, Oğuz, Kader Karlı, Çukur, Tolga

Abstract

Magnetic resonance imaging (MRI) offers the flexibility to image a given anatomic volume under a multitude of tissue contrasts. Yet, scan time considerations put stringent limits on the quality and diversity of MRI data. The gold-standard approach to alleviate this limitation is to recover high-quality images from data undersampled across various dimensions, most commonly the Fourier domain or contrast sets. A primary distinction among recovery methods is whether the anatomy is processed per volume or per cross-section. Volumetric models offer enhanced capture of global contextual information, but they can suffer from suboptimal learning due to elevated model complexity. Cross-sectional models with lower complexity offer improved learning behavior, yet they ignore contextual information across the longitudinal dimension of the volume. Here, we introduce a novel progressive volumetrization strategy for generative models (ProvoGAN) that serially decomposes complex volumetric image recovery tasks into successive cross-sectional mappings task-optimally ordered across individual rectilinear dimensions. ProvoGAN effectively captures global context and recovers fine-structural details across all dimensions, while maintaining low model complexity and improved learning behavior. Comprehensive demonstrations on mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields superior performance to state-of-the-art volumetric and cross-sectional models.
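The core idea of progressive volumetrization can be sketched in a few lines: a 3D volume is decomposed into 2D cross-sections along one rectilinear axis, each slice is mapped by a low-complexity 2D model, the slices are re-stacked, and the result is passed to the next stage along a different axis. The sketch below is purely illustrative, not the authors' implementation: `apply_2d_model` is a hypothetical placeholder (here the identity) standing in for a trained cross-sectional GAN generator, and the axis order is fixed rather than task-optimized as in the actual method.

```python
import numpy as np

def apply_2d_model(slice_2d):
    # Hypothetical placeholder for a trained cross-sectional generator;
    # the identity mapping is used purely for illustration.
    return slice_2d

def map_cross_sections(volume, axis):
    # Decompose the volume into 2D cross-sections along `axis`,
    # map each slice independently, then re-stack into a volume.
    slices = np.moveaxis(volume, axis, 0)
    mapped = np.stack([apply_2d_model(s) for s in slices])
    return np.moveaxis(mapped, 0, axis)

def progressive_volumetrization(volume, axis_order=(0, 1, 2)):
    # Serial cascade: each stage refines the previous stage's output
    # along a different rectilinear dimension. In the actual method the
    # ordering is selected task-optimally; here it is fixed.
    for axis in axis_order:
        volume = map_cross_sections(volume, axis)
    return volume

vol = np.random.rand(16, 16, 16)
out = progressive_volumetrization(vol)
print(out.shape)  # (16, 16, 16)
```

Because each stage conditions on a full re-stacked volume from the previous stage, the 2D models in later stages can still exploit context along dimensions they do not slice over, which is how the cascade approximates a volumetric mapping at cross-sectional model complexity.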
