Paper Title
Pose And Joint-Aware Action Recognition
Paper Authors
Paper Abstract
Recent progress on action recognition has mainly focused on RGB and optical flow features. In this paper, we approach the problem of joint-based action recognition. Unlike other modalities, the constellation of joints and their motion yields models with succinct human motion information for activity recognition. We present a new model for joint-based action recognition, which first extracts motion features from each joint separately through a shared motion encoder before performing collective reasoning. Our joint selector module re-weights the joint information to select the most discriminative joints for the task. We also propose a novel joint-contrastive loss that pulls together groups of joint features which convey the same action. We strengthen the joint-based representations by using a geometry-aware data augmentation technique which jitters pose heatmaps while retaining the dynamics of the action. We show large improvements over the current state-of-the-art joint-based approaches on the JHMDB, HMDB, Charades, and AVA action recognition datasets. A late fusion with RGB- and flow-based approaches yields additional improvements. Our model also outperforms the existing baseline on Mimetics, a dataset with out-of-context actions.
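To make the described pipeline concrete, below is a minimal PyTorch-style sketch (not the authors' released code) of the three pieces named in the abstract: a shared motion encoder applied to each joint's heatmap sequence independently, a joint selector that re-weights joint features toward the most discriminative joints, and collective reasoning via a weighted pooling followed by a classifier. All module names, layer sizes, and the soft-attention form of the selector are illustrative assumptions.

```python
# Hypothetical sketch of a per-joint action recognition model, assuming the
# input is a stack of per-joint pose heatmaps of shape (B, J, T, H, W).
import torch
import torch.nn as nn


class PerJointActionModel(nn.Module):
    def __init__(self, num_joints=15, feat_dim=128, num_classes=21):
        super().__init__()
        # Shared motion encoder: the same 3D conv stack is applied to every
        # joint's (T, H, W) heatmap sequence independently (weights are shared).
        self.motion_encoder = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Joint selector (assumed here to be soft attention): scores each
        # joint feature and re-weights it so discriminative joints dominate.
        self.selector = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, heatmaps):
        B, J, T, H, W = heatmaps.shape
        x = heatmaps.reshape(B * J, 1, T, H, W)
        joint_feats = self.motion_encoder(x).reshape(B, J, -1)      # (B, J, D)
        weights = torch.softmax(self.selector(joint_feats), dim=1)  # (B, J, 1)
        pooled = (weights * joint_feats).sum(dim=1)                 # (B, D)
        # joint_feats could also feed a contrastive loss that pulls together
        # joint features from clips of the same action (not shown here).
        return self.classifier(pooled), joint_feats


# Example usage: a batch of 2 clips, 15 joints, 16 frames, 32x32 heatmaps.
model = PerJointActionModel()
logits, joint_feats = model(torch.randn(2, 15, 16, 32, 32))
```

The geometry-aware augmentation and the exact joint-contrastive loss from the abstract are omitted; the sketch only illustrates how a shared encoder plus a selector can operate on joints individually before collective reasoning.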