Paper Title
Spatiotemporal Fusion in 3D CNNs: A Probabilistic View
Paper Authors
Paper Abstract
Despite the success in still image recognition, deep neural networks for spatiotemporal signal tasks (such as human action recognition in videos) have still suffered from low efficacy and inefficiency over the past years. Recently, human experts have put more effort into analyzing the importance of different components in 3D convolutional neural networks (3D CNNs) in order to design more powerful spatiotemporal learning backbones. Among many others, spatiotemporal fusion is one of the essentials. It controls how spatial and temporal signals are extracted at each layer during inference. Previous attempts usually start with ad-hoc designs that empirically combine certain convolutions and then draw conclusions based on the performance obtained by training the corresponding networks. These methods only support network-level analysis on a limited number of fusion strategies. In this paper, we propose to convert the spatiotemporal fusion strategies into a probability space, which allows us to perform network-level evaluations of various fusion strategies without having to train them separately. In addition, we can obtain fine-grained numerical information, such as layer-level preferences on spatiotemporal fusion, within the probability space. Our approach greatly boosts the efficiency of analyzing spatiotemporal fusion. Based on the probability space, we further generate new fusion strategies that achieve state-of-the-art performance on four well-known action recognition datasets.
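The abstract does not spell out how the probability space over fusion strategies is parameterized. Below is a minimal, hypothetical sketch of one common way to realize such an idea: each layer holds a learnable categorical distribution (a softmax over logits) across candidate fusion operators, here a spatial-only, a temporal-only, and a joint spatiotemporal 3D convolution. The class name `ProbabilisticFusionLayer`, the choice of the three candidates, and the softmax parameterization are illustrative assumptions, not the paper's exact formulation; reading the learned probabilities would give the kind of layer-level preference the abstract mentions.

```python
# Illustrative sketch only: per-layer categorical distribution over fusion choices,
# not the authors' actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProbabilisticFusionLayer(nn.Module):
    """Mixes spatial, temporal, and joint spatiotemporal convolutions
    over a 5D video tensor (N, C, T, H, W), weighted by softmax probabilities."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Candidate fusion operators (hypothetical choice of candidates).
        self.spatial = nn.Conv3d(in_channels, out_channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(in_channels, out_channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.joint = nn.Conv3d(in_channels, out_channels,
                               kernel_size=(3, 3, 3), padding=1)
        # Unnormalized log-probabilities over the three fusion choices.
        self.logits = nn.Parameter(torch.zeros(3))

    def fusion_probabilities(self):
        # Layer-level preference over the candidate fusion operators.
        return F.softmax(self.logits, dim=0)

    def forward(self, x):
        p = self.fusion_probabilities()
        return p[0] * self.spatial(x) + p[1] * self.temporal(x) + p[2] * self.joint(x)


# Usage: run a random clip through the layer and inspect the (untrained) preferences.
layer = ProbabilisticFusionLayer(16, 32)
clip = torch.randn(2, 16, 8, 32, 32)  # (batch, channels, frames, height, width)
out = layer(clip)
print(out.shape, layer.fusion_probabilities().tolist())
```

Under this kind of parameterization, many fusion strategies can be compared by inspecting or sampling from the per-layer distributions of a single trained model, rather than training a separate network for each strategy, which is consistent with the efficiency argument made in the abstract.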