Paper Title
Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events
Paper Authors
Paper Abstract
As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress via deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame-prediction routine. They suffer from two gaps: (1) They cannot localize video activities in a manner that is both precise and comprehensive. (2) They lack sufficient ability to utilize high-level semantics and temporal context information. Inspired by the cloze test frequently used in language studies, we propose a brand-new VAD solution named Video Event Completion (VEC) to bridge the gaps above. First, we propose a novel pipeline to achieve both precise and comprehensive enclosure of video activities: appearance and motion are exploited as mutually complementary cues to localize regions of interest (RoIs), and a normalized spatio-temporal cube (STC) is built from each RoI as a video event, which lays the foundation of VEC and serves as its basic processing unit. Second, we encourage the DNN to capture high-level semantics by solving a visual cloze test. To build such a visual cloze test, a certain patch of an STC is erased to yield an incomplete event (IE), and the DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate richer motion dynamics, another DNN is trained to infer the erased patch's optical flow. Finally, two ensemble strategies using different types of IEs and modalities are proposed to boost VAD performance, so as to fully exploit temporal context and modality information. VEC consistently outperforms state-of-the-art methods by a notable margin (typically 1.5%-5% AUROC) on commonly-used VAD benchmarks. Our code and results can be verified at github.com/yuguangnudt/VEC_VAD.
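To make the STC and visual cloze-test construction described above concrete, the following is a minimal sketch, not the authors' released code: function names such as `build_stc` and `make_incomplete_event`, the fixed 32x32 patch size, and zero-filling the erased patch are illustrative assumptions.

```python
import numpy as np
import cv2  # used only for resizing crops; any resizer would do


def build_stc(frames, box, size=32):
    """Crop the same RoI box from T consecutive frames and resize each crop
    to a fixed resolution, yielding a normalized spatio-temporal cube (STC)
    of shape (T, size, size, C) that represents one video event."""
    x1, y1, x2, y2 = box
    patches = [cv2.resize(f[y1:y2, x1:x2], (size, size)) for f in frames]
    return np.stack(patches, axis=0)


def make_incomplete_event(stc, erase_idx):
    """Erase one temporal patch of the STC (zero-filled here as an assumed
    choice) to form an incomplete event (IE); the erased patch is kept as
    the completion target of the visual cloze test."""
    ie = stc.copy()
    target = stc[erase_idx].copy()
    ie[erase_idx] = 0
    return ie, target


# Usage sketch: `frames` is a list of T consecutive frames (H, W, C) and
# `box` an RoI obtained from appearance/motion cues; varying `erase_idx`
# produces the different cloze tests over the same event that the ensemble
# strategies aggregate.
# stc = build_stc(frames, box)
# ie, target = make_incomplete_event(stc, erase_idx=2)
```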