Paper Title

An Empirical Analysis of Recurrent Learning Algorithms in Neural Lossy Image Compression Systems

Paper Authors

Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles

Paper Abstract

Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark. However, they are slow to train (due to backprop-through-time) and, to the best of our knowledge, have not been systematically evaluated on a large variety of datasets. In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms, while exploring the effects of alternative training strategies (when applicable). The hybrid recurrent neural decoder is a former state-of-the-art model (recently overtaken by a Google model) that can be trained using backprop-through-time (BPTT) or with alternative algorithms such as sparse attentive backtracking (SAB), unbiased online recurrent optimization (UORO), and real-time recurrent learning (RTRL). We compare these training alternatives, along with the Google models (GOOG and E2E), on six benchmark datasets. Surprisingly, we find that the model trained with SAB performs best, outperforming even BPTT, with faster convergence and a higher peak signal-to-noise ratio (PSNR).
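
For reference, the PSNR used in the abstract to rank the training algorithms is defined as 10·log10(MAX²/MSE), where MAX is the largest possible pixel value (255 for 8-bit images) and MSE is the mean squared error between the original image and its lossy reconstruction. A minimal NumPy sketch of the metric follows; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (in dB) between a reference image and its reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage: an 8-bit image versus a noisy copy of itself.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noise = rng.integers(-5, 6, size=img.shape)
noisy = np.clip(img.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```

Higher PSNR means the reconstruction is closer to the original, so a compression model that converges to a higher PSNR at the same bit rate is considered better.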
