Paper Title

Motion-aware Dynamic Graph Neural Network for Video Compressive Sensing

Paper Authors

Ruiying Lu, Ziheng Cheng, Bo Chen, Xin Yuan

Paper Abstract

Video snapshot compressive imaging (SCI) uses a 2D detector to capture sequential video frames and compress them into a single measurement. Various reconstruction methods have been developed to recover the high-speed video frames from the snapshot measurement. However, most existing reconstruction methods cannot efficiently capture long-range spatial and temporal dependencies, which are critical for video processing. In this paper, we propose a flexible and robust approach based on graph neural networks (GNNs) to efficiently model non-local interactions between pixels in space and time, regardless of distance. Specifically, we develop a motion-aware dynamic GNN for better video representation, i.e., representing each node as the aggregation of its relative neighbors under the guidance of frame-by-frame motion; the model consists of motion-aware dynamic sampling, cross-scale node sampling, global knowledge integration, and graph aggregation. Extensive results on both simulation and real data demonstrate the effectiveness and efficiency of the proposed approach, and visualizations illustrate the intrinsic dynamic sampling operations of our model for boosting video SCI reconstruction. The code and model will be released.
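The sensing process the abstract describes (frames modulated and compressed into one snapshot) and the idea of representing each node by aggregating sampled neighbors can be sketched as follows. This is a minimal illustration, not the authors' implementation: the forward model is the standard video SCI measurement equation, and the neighbor selection here is plain feature kNN, a stand-in assumption for the paper's motion-guided dynamic sampling.

```python
import numpy as np

# Standard video SCI forward model (a common formulation, not the paper's code):
# T high-speed frames X_t are modulated by per-frame binary masks M_t and
# summed on the 2D detector into a single snapshot measurement Y.
rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                       # frames, height, width
X = rng.random((T, H, W))               # high-speed video frames
M = rng.integers(0, 2, (T, H, W))       # binary modulation masks
Y = (M * X).sum(axis=0)                 # snapshot: Y = sum_t M_t * X_t

# Minimal, generic dynamic-graph aggregation step: each pixel (node) is
# re-expressed as the mean of its k most similar nodes, loosely mirroring
# "represent each node as the aggregation of relative neighbors".
feats = X.reshape(T, -1).T              # one temporal feature vector per pixel
k = 4
d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # pairwise distances
nbrs = np.argsort(d, axis=1)[:, 1:k + 1]                    # k nearest neighbors
agg = feats[nbrs].mean(axis=1)          # aggregated node representations
```

In the paper's method the neighbor set is chosen dynamically per node under motion guidance and across scales, whereas this sketch uses a fixed kNN rule purely to show the aggregation pattern.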
