Paper Title
Splatting-based Synthesis for Video Frame Interpolation
Paper Authors
Paper Abstract
Frame interpolation is an essential video processing technique that adjusts the temporal resolution of an image sequence. While deep learning has brought great improvements to the area of video frame interpolation, techniques that make use of neural networks typically cannot easily be deployed in practical applications like a video editor, since they are either too computationally demanding or fail at high resolutions. In contrast, we propose a deep learning approach that solely relies on splatting to synthesize interpolated frames. This splatting-based synthesis for video frame interpolation is not only much faster than similar approaches, especially for multi-frame interpolation, but can also yield new state-of-the-art results at high resolutions.
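To make the core idea of splatting concrete, below is a minimal, illustrative sketch of forward-warping a frame to an intermediate time step t along an optical flow field. This is not the paper's exact synthesis pipeline (which builds on softmax-style splatting with learned components); it only shows plain average splatting with bilinear weights, and the function and variable names are hypothetical.

```python
# Hedged sketch: plain average splatting of a frame along optical flow.
# Assumes PyTorch; `average_splat` is a hypothetical helper, not the paper's API.
import torch


def average_splat(frame: torch.Tensor, flow: torch.Tensor, t: float) -> torch.Tensor:
    """Splat `frame` (B, C, H, W) forward along `flow` (B, 2, H, W) scaled by t."""
    b, c, h, w = frame.shape

    # Target (sub-pixel) coordinates for every source pixel.
    grid_y, grid_x = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    tx = grid_x + t * flow[:, 0]  # (B, H, W)
    ty = grid_y + t * flow[:, 1]

    # Append a channel of ones so the accumulated colors can be normalized
    # by the accumulated splatting weight afterwards.
    ones = torch.ones(b, 1, h, w, dtype=frame.dtype, device=frame.device)
    values = torch.cat([frame, ones], dim=1)
    out = torch.zeros_like(values)

    # Distribute each source pixel to its four nearest target pixels (bilinear).
    x0, y0 = tx.floor(), ty.floor()
    for dx, dy in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        xi, yi = (x0 + dx).long(), (y0 + dy).long()
        weight = (1 - (tx - (x0 + dx)).abs()) * (1 - (ty - (y0 + dy)).abs())

        # Keep only splats that land inside the image.
        valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        index = (yi.clamp(0, h - 1) * w + xi.clamp(0, w - 1)).view(b, 1, -1).expand(-1, c + 1, -1)
        contrib = (values * (weight * valid).unsqueeze(1)).view(b, c + 1, -1)
        out.view(b, c + 1, -1).scatter_add_(2, index, contrib)

    # Average splatting: normalize accumulated colors by accumulated weights.
    colors, weights = out[:, :c], out[:, c:]
    return colors / weights.clamp(min=1e-6)
```

In this simplified form, pixels that receive no splat remain zero and overlapping splats are averaged; the paper's contribution lies in how such forward-warped information is combined and refined to synthesize the final interpolated frame.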