Paper Title

Temporal Extension Module for Skeleton-Based Action Recognition

Authors

Yuya Obinata, Takuma Yamamoto

Abstract

We present a module that extends the temporal graph of a graph convolutional network (GCN) for action recognition with a sequence of skeletons. Existing methods attempt to represent a more appropriate spatial graph within each intra-frame, but disregard optimization of the temporal graph across inter-frames. Concretely, these methods connect only vertices corresponding to the same joint between frames. In this work, we focus on adding connections to multiple neighboring vertices on the inter-frame and extracting additional features based on the extended temporal graph. Our module is a simple yet effective method to extract correlated features of multiple joints in human movement. Moreover, our module yields further performance improvements when combined with other GCN methods that optimize only the spatial graph. We conduct extensive experiments on two large datasets, NTU RGB+D and Kinetics-Skeleton, and demonstrate that our module is effective for several existing models and that our final model achieves state-of-the-art performance.
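
The core idea of the abstract — extending the temporal graph so that a joint at frame t is connected not only to the same joint but also to its spatially neighboring joints at adjacent frames — can be illustrated with a small graph-convolution sketch. The code below is an illustrative assumption and not the authors' implementation: the toy skeleton, the `build_adjacency` helper, the tensor shapes, and the row normalization are all hypothetical stand-ins for whatever the actual module uses.

```python
# A minimal sketch (not the paper's code) of an extended spatio-temporal graph:
# in a standard ST-GCN-style temporal graph, joint v at frame t links only to
# joint v at frames t-1 and t+1; the extension also links v to the spatial
# neighbors of v in adjacent frames, and features are aggregated over this
# extended adjacency with one graph-convolution step.
import torch

def build_adjacency(num_joints, skeleton_edges, num_frames, extend_temporal):
    """Return a (T*V, T*V) adjacency matrix over a spatio-temporal graph."""
    V, T = num_joints, num_frames
    A = torch.zeros(T * V, T * V)

    # Spatial edges within each frame, plus self-loops.
    for t in range(T):
        for v in range(V):
            A[t * V + v, t * V + v] = 1.0
        for (i, j) in skeleton_edges:
            A[t * V + i, t * V + j] = 1.0
            A[t * V + j, t * V + i] = 1.0

    # Temporal edges between consecutive frames.
    for t in range(T - 1):
        for v in range(V):
            # Standard temporal graph: same joint in adjacent frames.
            A[t * V + v, (t + 1) * V + v] = 1.0
            A[(t + 1) * V + v, t * V + v] = 1.0
        if extend_temporal:
            # Extended temporal graph: also connect spatially neighboring
            # joints across adjacent frames, so correlated joints are mixed.
            for (i, j) in skeleton_edges:
                A[t * V + i, (t + 1) * V + j] = 1.0
                A[(t + 1) * V + j, t * V + i] = 1.0
                A[t * V + j, (t + 1) * V + i] = 1.0
                A[(t + 1) * V + i, t * V + j] = 1.0
    return A

# Toy 5-joint skeleton over 3 frames; x holds per-joint features (T*V, C_in).
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
A = build_adjacency(num_joints=5, skeleton_edges=edges, num_frames=3,
                    extend_temporal=True)
D_inv = torch.diag(A.sum(dim=1).pow(-1))   # row-normalize the adjacency
x = torch.randn(3 * 5, 8)                  # random input features
W = torch.nn.Linear(8, 16, bias=False)     # learnable feature transform
out = D_inv @ A @ W(x)                     # one graph-convolution step
print(out.shape)                           # torch.Size([15, 16])
```

Setting `extend_temporal=False` reproduces the standard temporal graph in which only identical joints are connected across frames, i.e., the baseline behavior that the proposed module extends.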
