Title
Memory-efficient Learning for Large-scale Computational Imaging
Authors
Abstract
Critical aspects of computational imaging systems, such as experimental design and image priors, can be optimized through deep networks formed by the unrolled iterations of classical model-based reconstructions (termed physics-based networks). However, for real-world large-scale inverse problems, computing gradients via backpropagation is infeasible due to memory limitations of graphics processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network's layers to enable data-driven design for large-scale computational imaging systems. We demonstrate our method on a small-scale compressed sensing example, as well as two large-scale real-world systems: multi-channel magnetic resonance imaging and super-resolution optical microscopy.
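The core idea of the abstract — exploiting layer reversibility so that intermediate activations need not be stored for backpropagation — can be illustrated with a small sketch. The additive-coupling layer, the function names, and the tanh nonlinearity below are illustrative assumptions, not the paper's actual architecture: the point is only that the backward pass reconstructs the layer's input from its output instead of caching it.

```python
import numpy as np

def f(x, w):
    # illustrative nonlinearity inside the coupling; w is its scalar parameter
    return np.tanh(w * x)

def df_dx(x, w):
    # derivative of f with respect to its input
    return w * (1.0 - np.tanh(w * x) ** 2)

def df_dw(x, w):
    # derivative of f with respect to the parameter w
    return x * (1.0 - np.tanh(w * x) ** 2)

def forward(x1, x2, w):
    # additive coupling: (x1, x2) -> (x1, x2 + f(x1)); invertible by construction
    return x1, x2 + f(x1, w)

def inverse(y1, y2, w):
    # reconstruct the layer input exactly from its output
    return y1, y2 - f(y1, w)

def backward(y1, y2, g1, g2, w):
    # memory-efficient backward pass: the input (x1, x2) is recomputed
    # from the output via inverse(), so it never has to be stored
    x1, _ = inverse(y1, y2, w)
    gx2 = g2
    gx1 = g1 + g2 * df_dx(x1, w)
    gw = np.sum(g2 * df_dw(x1, w))
    return gx1, gx2, gw
```

Chaining many such layers keeps memory use constant in network depth, since each layer's activations are recovered on the fly during the backward sweep rather than held from the forward pass.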