Paper Title

Implicit Subspace Prior Learning for Dual-Blind Face Restoration

Authors

Lingbo Yang, Pan Wang, Zhanning Gao, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao

Abstract

Face restoration is an inherently ill-posed problem, where additional prior constraints are typically considered crucial for mitigating such pathology. However, real-world image priors are often hard to simulate with precise mathematical models, which inevitably limits the performance and generalization ability of existing prior-regularized restoration methods. In this paper, we study the problem of face restoration under a more practical "dual-blind" setting, i.e., without prior assumptions or hand-crafted regularization terms on the degradation profile or image contents. To this end, a novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration, with two key elements: 1) an implicit formulation to circumvent the ill-defined restoration mapping and 2) a subspace prior decomposition and fusion mechanism to dynamically handle inputs at varying degradation levels with consistently high-quality restoration results. Experimental results demonstrate significant perception-distortion improvements of ISPL over existing state-of-the-art methods on a variety of restoration subtasks, including 3.69 dB PSNR and 45.8% FID gains over ESRGAN, the 2018 NTIRE SR challenge winner. Overall, we show that it is possible to capture and utilize prior knowledge without explicitly formulating it, which will help inspire new research paradigms for low-level vision tasks.
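To make the "subspace prior decomposition and fusion" idea from the abstract more concrete, below is a minimal PyTorch sketch of one plausible realization: several parallel prior branches encode the degraded input into separate feature subspaces, and a degradation-aware gate predicts per-branch fusion weights so that inputs at different degradation levels can emphasize different priors. The module, layer, and shape choices (`SubspacePriorFusion`, `num_subspaces`, the pooling-based gate, etc.) are illustrative assumptions for this sketch, not the authors' actual ISPL architecture.

```python
# Illustrative sketch (assumed structure, not the official ISPL implementation):
# K parallel prior branches + a gating network that fuses them per input.
import torch
import torch.nn as nn

class SubspacePriorFusion(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64, num_subspaces=4):
        super().__init__()
        # One lightweight encoder per prior subspace (hypothetical branches).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            )
            for _ in range(num_subspaces)
        ])
        # Degradation-aware gate: global pooling + MLP -> softmax fusion weights.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, feat_ch),
            nn.ReLU(inplace=True),
            nn.Linear(feat_ch, num_subspaces),
            nn.Softmax(dim=1),
        )
        # Simple decoder back to image space.
        self.decoder = nn.Conv2d(feat_ch, in_ch, 3, padding=1)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        w = self.gate(x)                                           # (B, K)
        fused = (w[:, :, None, None, None] * feats).sum(dim=1)     # (B, C, H, W)
        return self.decoder(fused)

if __name__ == "__main__":
    model = SubspacePriorFusion()
    lq = torch.randn(1, 3, 128, 128)    # a degraded face crop
    restored = model(lq)
    print(restored.shape)               # torch.Size([1, 3, 128, 128])
```

The key design point this sketch tries to capture is that the fusion weights are predicted from the input itself, so no explicit degradation model or hand-crafted regularizer is required to decide how strongly each learned prior contributes.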
