Paper Title


Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A Memory-Efficient Inverse Problem Solver

Authors

Peimeng Guan, Jihui Jin, Justin Romberg, Mark A. Davenport

Abstract


In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements. Classical optimization-based techniques proceed by optimizing a data consistency metric together with a regularizer. Current state-of-the-art machine learning approaches draw inspiration from such techniques by unrolling the iterative updates for an optimization-based solver and then learning a regularizer from data. This loop unrolling (LU) method has shown tremendous success, but often requires a deep model for the best performance leading to high memory costs during training. Thus, to address the balance between computation cost and network expressiveness, we propose an LU algorithm with shallow equilibrium regularizers (LUSER). These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training. The proposed method is evaluated on image deblurring, computed tomography (CT), as well as single-coil Magnetic Resonance Imaging (MRI) tasks and shows similar, or even better, performance while requiring up to 8 times less computational resources during training when compared against a more typical LU architecture with feedforward convolutional regularizers.
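The abstract describes the LUSER structure: unrolled data-consistency gradient steps interleaved with a shallow implicit (equilibrium) regularizer whose output is a fixed point of a small network. The following NumPy toy is only a structural sketch of that idea under simplifying assumptions: the operator `A`, the weight matrices `W` and `B`, the step size, and the `tanh` fixed-point map are all invented for illustration, and the regularizer is random and untrained, unlike the learned implicit models in the paper.

```python
import numpy as np

def equilibrium_regularizer(x, W, B, tol=1e-8, max_iter=200):
    """Shallow implicit regularizer: return the fixed point z* = tanh(W z* + B x).

    When the spectral norm of W is below 1 this update is a contraction,
    so plain forward iteration converges. (A toy stand-in for the learned
    equilibrium model; not the paper's architecture.)
    """
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + B @ x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

def loop_unrolled_solver(A, y, W, B, n_iters=50, step=0.5, gamma=0.01):
    """Unrolled gradient descent on ||A x - y||^2, with a small
    equilibrium-regularizer correction applied at every iteration."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                 # data-consistency gradient
        x = x - step * grad                      # gradient step
        x = x + gamma * equilibrium_regularizer(x, W, B)  # regularizer correction
    return x

# Toy demo: well-conditioned forward operator, random (untrained) regularizer.
rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
y = A @ x_true

W = 0.1 * rng.standard_normal((n, n))    # small weights keep the map contractive
B = 0.05 * rng.standard_normal((n, n))

x_hat = loop_unrolled_solver(A, y, W, B)
residual_initial = np.linalg.norm(y)             # residual at the zero initializer
residual_final = np.linalg.norm(A @ x_hat - y)
```

The memory advantage claimed in the abstract comes from the implicit layer: during training, gradients through a fixed point can be obtained via the implicit function theorem at `z*` alone, rather than by storing every iterate of a deep feedforward regularizer, which is what makes the equilibrium variant cheaper at equal expressiveness.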
