Title
Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition
Authors
Abstract
Recently, 3D convolutional networks have yielded good performance in action recognition. However, an optical flow stream is still needed to ensure better performance, and its computational cost is very high. In this paper, we propose a fast but effective way to extract motion features from videos by utilizing residual frames as the input data to 3D ConvNets. By replacing traditional stacked RGB frames with residual ones, improvements of 20.5 and 12.5 percentage points in top-1 accuracy can be achieved on the UCF101 and HMDB51 datasets when training from scratch. Because residual frames contain little information about object appearance, we further use a 2D convolutional network to extract appearance features and combine them with the results from residual frames to form a two-path solution. On three benchmark datasets, our two-path solution achieves performance better than or comparable to methods that use additional optical flow, and in particular outperforms state-of-the-art models on the Mini-Kinetics dataset. Further analysis indicates that better motion features can be extracted using residual frames with 3D ConvNets, and that our residual-frame-input path is a good supplement to existing RGB-frame-input models.
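The preprocessing step described above — replacing stacked RGB frames with residual frames — can be sketched as follows. This is a minimal illustration, assuming a residual frame is simply the difference between adjacent RGB frames (a common formulation); the function name `residual_frames` is ours, not from the paper.

```python
import numpy as np

def residual_frames(clip: np.ndarray) -> np.ndarray:
    """Compute residual frames from a stacked RGB clip.

    clip: array of shape (T, H, W, 3) holding T consecutive RGB frames.
    Returns an array of shape (T-1, H, W, 3), where each residual is the
    difference between adjacent frames. Static appearance largely cancels
    out, leaving mostly motion information for the 3D ConvNet input.
    """
    clip = clip.astype(np.float32)
    return clip[1:] - clip[:-1]
```

For example, a 16-frame 112x112 clip of shape `(16, 112, 112, 3)` yields a residual stack of shape `(15, 112, 112, 3)`, which would then be fed to the 3D ConvNet in place of the raw RGB stack.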