Paper Title

Adaptive Future Frame Prediction with Ensemble Network

Authors

Wonjik Kim, Masayuki Tanaka, Masatoshi Okutomi, Yoko Sasaki

Abstract

Future frame prediction in videos is a challenging problem because videos include complicated movements and large appearance changes. Learning-based future frame prediction approaches have been proposed in the literature. A common limitation of the existing learning-based approaches is the mismatch between training data and test data. In the future frame prediction task, we can obtain the ground truth data by simply waiting for a few frames, which means we can update the prediction model online during the test phase. We therefore propose an adaptive update framework for the future frame prediction task. The proposed adaptive update framework consists of a pre-trained prediction network, a continuously updated prediction network, and a weight estimation network. We also show that our pre-trained prediction model achieves performance comparable to existing state-of-the-art approaches. We demonstrate that our approach outperforms existing methods, especially for dynamically changing scenes.
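To make the adaptive update idea concrete, below is a minimal sketch, assuming PyTorch. It illustrates the three components named in the abstract (a frozen pre-trained predictor, an online-updated copy, and a weight estimator that blends their outputs) and the online update that becomes possible once the true future frame arrives a few steps later. The network definitions, blending scheme, delay buffer, and all names (PredictionNet, WeightNet, step) are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of adaptive future frame prediction with online updating (assumed PyTorch).
import copy
import torch
import torch.nn as nn

class PredictionNet(nn.Module):
    """Toy future-frame predictor: maps the current frame to the next frame."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, frame):
        return self.net(frame)

class WeightNet(nn.Module):
    """Estimates a per-pixel blending weight in [0, 1] from the two predictions."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, pred_a, pred_b):
        return self.net(torch.cat([pred_a, pred_b], dim=1))

# The continuously updated predictor starts as a copy of the pre-trained one.
pretrained = PredictionNet().eval()
online = copy.deepcopy(pretrained).train()
weight_net = WeightNet().eval()
optimizer = torch.optim.Adam(online.parameters(), lr=1e-4)

delayed = []  # past input frames waiting for their ground truth to arrive

def step(frame, delay=1):
    """Predict the next frame, then update the online model once ground truth is available."""
    with torch.no_grad():
        p_fixed = pretrained(frame)
        p_online = online(frame)
        w = weight_net(p_fixed, p_online)
    prediction = w * p_online + (1 - w) * p_fixed  # adaptive blend of the two predictors

    delayed.append(frame)
    if len(delayed) > delay:
        past_frame = delayed.pop(0)
        # The newly observed frame is the ground truth for the past input:
        # update only the online predictor on this (past input, observed frame) pair.
        loss = nn.functional.l1_loss(online(past_frame), frame)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return prediction
```

In this sketch only the online copy is trained at test time, so the frozen pre-trained predictor remains a stable fallback while the weight estimator decides, per pixel, how much to trust the adapted model for the current scene.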
