Paper Title


GliTr: Glimpse Transformers with Spatiotemporal Consistency for Online Action Prediction

Authors

Samrudhdhi B. Rangrej, Kevin J. Liang, Tal Hassner, James J. Clark

Abstract

Many online action prediction models observe complete frames to locate and attend to informative subregions in the frames called glimpses and recognize an ongoing action based on global and local information. However, in applications with constrained resources, an agent may not be able to observe the complete frame, yet must still locate useful glimpses to predict an incomplete action based on local information only. In this paper, we develop Glimpse Transformers (GliTr), which observe only narrow glimpses at all times, thus predicting an ongoing action and the following most informative glimpse location based on the partial spatiotemporal information collected so far. In the absence of a ground truth for the optimal glimpse locations for action recognition, we train GliTr using a novel spatiotemporal consistency objective: We require GliTr to attend to the glimpses with features similar to the corresponding complete frames (i.e. spatial consistency) and the resultant class logits at time $t$ equivalent to the ones predicted using whole frames up to $t$ (i.e. temporal consistency). Inclusion of our proposed consistency objective yields ~10% higher accuracy on the Something-Something-v2 (SSv2) dataset than the baseline cross-entropy objective. Overall, despite observing only ~33% of the total area per frame, GliTr achieves 53.02% and 93.91% accuracy on the SSv2 and Jester datasets, respectively.
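The spatiotemporal consistency objective described above can be sketched as two terms: a spatial term pulling glimpse features toward the corresponding full-frame features, and a temporal term matching the glimpse model's class logits at time $t$ to those predicted from whole frames up to $t$. This is a minimal illustrative sketch, not the paper's implementation: the function names, the choice of MSE for the spatial term and KL divergence for the temporal term, and the loss weights `lam_s`/`lam_t` are all assumptions, since the abstract only states that the features should be "similar" and the logits "equivalent".

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def spatial_consistency(glimpse_feat, frame_feat):
    # Assumed form: MSE between glimpse-branch features and the
    # full-frame (teacher) features at each time step
    return float(np.mean((glimpse_feat - frame_feat) ** 2))

def temporal_consistency(student_logits, teacher_logits):
    # Assumed form: KL divergence from the full-frame (teacher)
    # class distribution to the glimpse (student) distribution
    p = softmax(teacher_logits)
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits) + 1e-12)
    return float(np.mean(np.sum(p * (log_p - log_q), axis=-1)))

def consistency_loss(glimpse_feat, frame_feat,
                     student_logits, teacher_logits,
                     lam_s=1.0, lam_t=1.0):
    # Hypothetical combined objective; the weights are illustrative
    return (lam_s * spatial_consistency(glimpse_feat, frame_feat)
            + lam_t * temporal_consistency(student_logits, teacher_logits))
```

In a teacher-student setup like the one the abstract implies, the full-frame features and logits would come from a teacher network observing whole frames, with gradients flowing only through the glimpse (student) branch.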
