Paper Title
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks
Paper Authors
Paper Abstract
In the deep learning era, generating long videos of high quality remains challenging due to the spatio-temporal complexity and continuity of videos. Prior works have attempted to model the video distribution by representing videos as 3D grids of RGB values, which impedes the scale of generated videos and neglects continuous dynamics. In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encodes a continuous signal into a parameterized neural network, effectively mitigates this issue. Utilizing INRs of video, we propose the dynamics-aware implicit generative adversarial network (DIGAN), a novel generative adversarial network for video generation. Specifically, we introduce (a) an INR-based video generator that improves motion dynamics by manipulating space and time coordinates differently, and (b) a motion discriminator that efficiently identifies unnatural motions without observing the entire long frame sequence. We demonstrate the superiority of DIGAN on various datasets, along with multiple intriguing properties, e.g., long video synthesis, video extrapolation, and non-autoregressive video generation. For example, DIGAN improves the previous state-of-the-art FVD score on UCF-101 by 30.7% and can be trained on 128-frame videos at 128x128 resolution, 80 frames longer than the 48 frames of the previous state-of-the-art method.
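To make the INR paradigm concrete, here is a minimal, illustrative PyTorch sketch of a coordinate-based video generator: a network maps a latent code plus (x, y, t) coordinates to an RGB value, treating the time coordinate differently from the spatial ones. The class name, layer sizes, and embedding choices are assumptions for illustration only, not the authors' DIGAN implementation.

```python
import torch
import torch.nn as nn

class INRVideoGenerator(nn.Module):
    """Hypothetical sketch: encodes a video as a continuous function
    mapping a latent code and (x, y, t) coordinates to RGB values."""
    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        # Space and time are embedded separately, echoing the idea of
        # manipulating spatial and temporal coordinates differently.
        self.space_embed = nn.Linear(2, hidden_dim)
        self.time_embed = nn.Linear(1, hidden_dim)
        self.latent_embed = nn.Linear(latent_dim, hidden_dim)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, z, coords_xy, coords_t):
        # z: (N, latent_dim); coords_xy: (N, 2); coords_t: (N, 1)
        h = (self.latent_embed(z)
             + self.space_embed(coords_xy)
             + torch.sin(self.time_embed(coords_t)))  # periodic time features
        return self.mlp(h)

# Render a 16-frame, 64x64 video by sampling a coordinate grid; longer or
# higher-resolution videos only require a denser grid, not a new network.
z = torch.randn(1, 128)
ts = torch.linspace(0, 1, 16)
ys = torch.linspace(-1, 1, 64)
xs = torch.linspace(-1, 1, 64)
grid = torch.cartesian_prod(ts, ys, xs)            # (16*64*64, 3)
coords_t, coords_xy = grid[:, :1], grid[:, 1:]
generator = INRVideoGenerator()
rgb = generator(z.expand(grid.size(0), -1), coords_xy, coords_t)
video = rgb.view(16, 64, 64, 3)                    # frames x H x W x RGB
```

Because the generated video is a continuous function of coordinates, frames at arbitrary times can be rendered non-autoregressively, which is what makes properties such as long video synthesis and extrapolation natural in this framework.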