Paper Title

DeltaCNN: End-to-End CNN Inference of Sparse Frame Differences in Videos

Authors

Parger, Mathias, Tang, Chengcheng, Twigg, Christopher D., Keskin, Cem, Wang, Robert, Steinberger, Markus

Abstract

Convolutional neural network inference on video data requires powerful hardware for real-time processing. Given the inherent coherence across consecutive frames, large parts of a video typically change little. By skipping identical image regions and truncating insignificant pixel updates, computational redundancy can in theory be reduced significantly. However, these theoretical savings have been difficult to translate into practice, as sparse updates hamper computational consistency and memory access coherence, which are key for efficiency on real hardware. With DeltaCNN, we present a sparse convolutional neural network framework that enables sparse frame-by-frame updates to accelerate video inference in practice. We provide sparse implementations for all typical CNN layers and propagate sparse feature updates end-to-end, without accumulating errors over time. DeltaCNN is applicable to all convolutional neural networks without retraining. To the best of our knowledge, we are the first to significantly outperform the dense reference, cuDNN, in practical settings, achieving speedups of up to 7x with only marginal differences in accuracy.
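The core idea the abstract describes, computing per-pixel frame differences, truncating insignificant updates to exact zeros, and incrementally updating a convolution's output, can be illustrated with a minimal NumPy sketch. This is not the DeltaCNN implementation (which provides sparse GPU kernels for all CNN layers); the `conv2d` helper, the frame sizes, and the truncation threshold below are illustrative assumptions.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of a single-channel image (illustrative, slow)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))

frame_prev = rng.standard_normal((32, 32))
# Next frame: mostly identical, with small sensor noise and one changed patch.
frame_next = frame_prev + 0.001 * rng.standard_normal((32, 32))
frame_next[10:14, 10:14] += 1.0

# Delta update: truncate insignificant per-pixel changes to exact zeros,
# so the corresponding regions can be skipped entirely.
threshold = 0.01
delta = frame_next - frame_prev
delta[np.abs(delta) < threshold] = 0.0
sparsity = np.mean(delta == 0.0)  # fraction of pixels that need no work

# Convolution is linear, so the output can be updated incrementally:
# conv(frame_next) ~= conv(frame_prev) + conv(truncated delta).
out_prev = conv2d(frame_prev, kernel)
out_incremental = out_prev + conv2d(delta, kernel)
out_dense = conv2d(frame_next, kernel)

print(f"sparsity of truncated delta: {sparsity:.2%}")
print(f"max deviation from dense result: {np.max(np.abs(out_incremental - out_dense)):.5f}")
```

The sketch shows both halves of the trade-off the abstract mentions: the truncated delta is overwhelmingly zero (the theoretical savings), while the truncation introduces only a small, bounded deviation from the dense result. Turning that zero-heavy delta into actual speedups on real hardware, rather than looping over mostly-zero pixels as here, is the part DeltaCNN's sparse kernels address.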
