Paper Title

Bregman Plug-and-Play Priors

Authors

Al-Shabili, Abdullah H., Xu, Xiaojian, Selesnick, Ivan, Kamilov, Ulugbek S.

Abstract

The past few years have seen a surge of activity around the integration of deep learning networks and optimization algorithms for solving inverse problems. Recent work on plug-and-play priors (PnP), regularization by denoising (RED), and deep unfolding has shown the state-of-the-art performance of such integration in a variety of applications. However, the current paradigm for designing such algorithms is inherently Euclidean, due to the use of the quadratic norm within the projection and proximal operators. We propose to broaden this perspective by considering a non-Euclidean setting based on the more general Bregman distance. Our new Bregman Proximal Gradient Method variant of PnP (PnP-BPGM) and Bregman Steepest Descent variant of RED (RED-BSD) replace the quadratic norms in the traditional PnP and RED updates with more general Bregman distances. We present a theoretical convergence result for PnP-BPGM and demonstrate the effectiveness of our algorithms on Poisson linear inverse problems.
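To make the idea concrete, below is a minimal sketch of a Bregman proximal-gradient PnP loop of the kind the abstract describes, specialized to a Poisson linear inverse problem. All names here (`poisson_grad`, `pnp_bpgm`, `toy_denoiser`) are illustrative, not from the paper; the mirror map is assumed to be the Burg entropy h(x) = -Σ log(x_i), a common choice for Poisson likelihoods, and the proximal step is replaced by a plugged-in denoiser (here a trivial smoothing stand-in rather than the learned denoiser the paper would use).

```python
import numpy as np

def poisson_grad(x, A, y, eps=1e-8):
    # Gradient of the Poisson negative log-likelihood
    # f(x) = sum(Ax - y * log(Ax)):  grad f(x) = A^T (1 - y / (Ax)).
    Ax = A @ x
    return A.T @ (1.0 - y / (Ax + eps))

def pnp_bpgm(A, y, denoise, x0, gamma=0.05, iters=100):
    """Sketch of a Bregman proximal-gradient PnP iteration.

    With the Burg entropy h(x) = -sum(log x), grad h(x) = -1/x, so the
    Bregman gradient step solves grad h(z) = grad h(x) - gamma * grad f(x),
    i.e. z = 1 / (1/x + gamma * grad f(x)). The proximal step is then
    replaced by a denoiser, in the plug-and-play spirit.
    """
    x = x0.copy()
    for _ in range(iters):
        g = poisson_grad(x, A, y)
        denom = np.maximum(1.0 / x + gamma * g, 1e-8)  # stay in positive orthant
        z = 1.0 / denom                                # Bregman (mirror) step
        x = denoise(z)                                 # plug-and-play "prox"
    return x

def toy_denoiser(z, alpha=0.1):
    # Placeholder denoiser: mild shrinkage toward the mean.
    return (1 - alpha) * z + alpha * z.mean()
```

A usage example: for a small nonnegative system `A`, observations `y = A @ x_true`, and a positive initialization, the loop stays in the positive orthant by construction, which is exactly what the Bregman geometry buys over a Euclidean gradient step.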
