Title

Video Deblurring by Fitting to Test Data

Authors

Xuanchi Ren, Zian Qian, Qifeng Chen

Abstract

Motion blur in videos captured by autonomous vehicles and robots can degrade their perception capability. In this work, we present a novel approach to video deblurring by fitting a deep network to the test video. Our key observation is that some frames in a video with motion blur are much sharper than others, and thus we can transfer the texture information in those sharp frames to blurry frames. Our approach heuristically selects sharp frames from a video and then trains a convolutional neural network on these sharp frames. The trained network often absorbs enough details in the scene to perform deblurring on all the video frames. As an internal learning method, our approach has no domain gap between training and test data, a problematic issue for existing video deblurring approaches. Experiments on real-world video data show that our model reconstructs clearer and sharper videos than state-of-the-art video deblurring approaches. Code and data are available at https://github.com/xrenaa/Deblur-by-Fitting.
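The abstract does not specify which sharpness heuristic is used to select training frames; a common proxy for frame sharpness is the variance of the discrete Laplacian (blurry frames have weak edges and thus low Laplacian variance). The sketch below is an illustrative stand-in, not the paper's actual selection rule; the function names and the top-k selection scheme are assumptions.

```python
def laplacian_variance(frame):
    """Variance of the discrete 4-neighbor Laplacian: a common sharpness
    proxy. `frame` is a 2D list of grayscale intensities (illustrative;
    not necessarily the heuristic used in the paper)."""
    h, w = len(frame), len(frame[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x] +
                   frame[y][x - 1] + frame[y][x + 1] - 4 * frame[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def select_sharp_frames(frames, k):
    """Return the indices of the k frames with the highest sharpness
    score; these would serve as the internal training set."""
    scores = [(laplacian_variance(f), i) for i, f in enumerate(frames)]
    scores.sort(reverse=True)
    return sorted(i for _, i in scores[:k])
```

A flat (heavily blurred) frame scores near zero, while a frame with strong edges scores high, so ranking by this score separates sharp frames from blurry ones before the network is fit to the sharp subset.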
