Paper Title

Bringing Old Films Back to Life

Paper Authors

Ziyu Wan, Bo Zhang, Dongdong Chen, Jing Liao

Paper Abstract

We present a learning-based framework, the Recurrent Transformer Network (RTN), to restore heavily degraded old films. Instead of performing frame-wise restoration, our method builds on hidden knowledge learned from adjacent frames, which contains abundant information about occlusions and helps restore challenging artifacts in each frame while ensuring temporal coherency. Moreover, contrasting the representation of the current frame with the hidden knowledge makes it possible to infer scratch positions in an unsupervised manner, and this defect localization generalizes well to real-world degradations. To better resolve mixed degradation and compensate for flow-estimation errors during frame alignment, we propose to leverage more expressive transformer blocks for spatial restoration. Experiments on both a synthetic dataset and real-world old films demonstrate the significant superiority of the proposed RTN over existing solutions. In addition, the same framework can effectively propagate color from keyframes to the whole video, ultimately yielding compelling restored films. The implementation and models will be released at https://github.com/raywzy/Bringing-Old-Films-Back-to-Life.
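To make the recurrent design concrete, below is a minimal, hypothetical PyTorch sketch of the frame-by-frame restoration loop the abstract describes: a hidden state is carried across frames, fused with the current frame's features, and refined by a spatial transformer block. All module names (`SpatialTransformerBlock`, `RecurrentRestorer`) and hyper-parameters are illustrative assumptions, not the authors' implementation; the flow-based frame alignment and unsupervised scratch localization from the paper are omitted for brevity. See the linked repository for the real RTN.

```python
# Hypothetical sketch only: illustrates the recurrent, hidden-state-based
# restoration idea from the abstract. It is NOT the authors' RTN; flow-based
# alignment and scratch localization are omitted, and all names and
# hyper-parameters here are assumptions.
import torch
import torch.nn as nn


class SpatialTransformerBlock(nn.Module):
    """Self-attention over spatial tokens, a stand-in for the paper's
    'more expressive transformer blocks' used for spatial restoration."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C)
        t = self.norm1(tokens)
        tokens = tokens + self.attn(t, t, t, need_weights=False)[0]
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class RecurrentRestorer(nn.Module):
    """Restores a clip frame by frame while carrying a hidden state, so each
    frame benefits from temporal context instead of frame-wise processing."""

    def __init__(self, dim: int = 32):
        super().__init__()
        self.dim = dim
        self.encode = nn.Conv2d(3, dim, 3, padding=1)
        self.fuse = nn.Conv2d(dim * 2, dim, 3, padding=1)  # frame + hidden
        self.block = SpatialTransformerBlock(dim)
        self.decode = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W); in the paper the hidden state would first
        # be warped to the current frame via optical flow (omitted here).
        b, t, _, h, w = frames.shape
        hidden = torch.zeros(b, self.dim, h, w, device=frames.device)
        outputs = []
        for i in range(t):
            feat = self.encode(frames[:, i])
            hidden = self.fuse(torch.cat([feat, hidden], dim=1))
            hidden = self.block(hidden)
            outputs.append(self.decode(hidden))
        return torch.stack(outputs, dim=1)


video = torch.randn(1, 5, 3, 64, 64)   # toy clip: 5 RGB frames of 64x64
restored = RecurrentRestorer()(video)
print(restored.shape)                   # torch.Size([1, 5, 3, 64, 64])
```

Because the hidden state is the only thing passed between time steps, consecutive outputs are implicitly coupled, which is what gives the temporal coherency the abstract emphasizes over frame-wise restoration.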
