Paper Title

Regularisation, optimisation, subregularity

Author

Valkonen, Tuomo

Abstract

Regularisation theory in Banach spaces, and non-norm-squared regularisation even in finite dimensions, generally relies upon Bregman divergences to replace norm convergence. This is comparable to the extension of first-order optimisation methods to Banach spaces. Bregman divergences can, however, be somewhat suboptimal in terms of descriptiveness. Using the concept of (strong) metric subregularity, previously used to prove the fast local convergence of optimisation methods, we show norm convergence in Banach spaces and for non-norm-squared regularisation. For problems such as total variation regularised image reconstruction, the metric subregularity reduces to a geometric condition on the ground truth: flat areas in the ground truth have to compensate for the fidelity term not having second-order growth within the kernel of the forward operator. Our approach to proving such regularisation results is based on optimisation formulations of inverse problems. As a side result of the regularisation theory that we develop, we provide regularisation complexity results for optimisation methods: how many steps $N_\delta$ of the algorithm do we have to take for the approximate solutions to converge as the corruption level $\delta \searrow 0$?
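For readers unfamiliar with the terminology, the following standard definitions (paraphrased from the general literature, not quoted from the paper itself) sketch the objects the abstract refers to. The prototypical variational regularisation problem, with forward operator $A$, noisy data $z^\delta$ at corruption level $\delta$, and convex regulariser $R$ (e.g., total variation), is

$$\min_{x \in X}\ \tfrac{1}{2}\|Ax - z^\delta\|^2 + \alpha R(x), \qquad \|z^\delta - z\| \le \delta.$$

The Bregman divergence of $R$ at $\tilde{x}$ with respect to a subgradient $q \in \partial R(\tilde{x})$, which classically replaces norm convergence in Banach-space regularisation theory, is

$$B_R^q(x, \tilde{x}) = R(x) - R(\tilde{x}) - \langle q, x - \tilde{x} \rangle,$$

and a set-valued map $H \colon X \rightrightarrows Y$ is metrically subregular at $\bar{x}$ for $\bar{y} \in H(\bar{x})$ if, for some $\kappa > 0$ and all $x$ in a neighbourhood of $\bar{x}$,

$$\operatorname{dist}(x, H^{-1}(\bar{y})) \le \kappa \operatorname{dist}(\bar{y}, H(x)),$$

and strongly metrically subregular if, moreover, $\|x - \bar{x}\| \le \kappa \operatorname{dist}(\bar{y}, H(x))$. It is the strong variant that upgrades Bregman-divergence convergence to norm convergence of the approximate solutions to the ground truth.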
