Paper Title

Variational Bayesian Quantization

Paper Authors

Yibo Yang, Robert Bamler, Stephan Mandt

Paper Abstract

We propose a novel algorithm for quantizing continuous latent representations in trained models. Our approach applies to deep probabilistic models, such as variational autoencoders (VAEs), and enables both data and model compression. Unlike current end-to-end neural compression methods that cater the model to a fixed quantization scheme, our algorithm separates model design and training from quantization. Consequently, our algorithm enables "plug-and-play" compression with variable rate-distortion trade-off, using a single trained model. Our algorithm can be seen as a novel extension of arithmetic coding to the continuous domain, and uses adaptive quantization accuracy based on estimates of posterior uncertainty. Our experimental results demonstrate the importance of taking into account posterior uncertainties, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method.
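To give a rough feel for the core idea of adapting quantization accuracy to posterior uncertainty, the sketch below rounds each latent dimension onto a grid whose spacing scales with its posterior standard deviation, so uncertain dimensions are stored more coarsely. This is a minimal illustration only, not the paper's actual algorithm (which extends arithmetic coding to the continuous domain); the sigma-proportional grid rule, the function name `adaptive_quantize`, and the example values are assumptions made here for illustration.

```python
import numpy as np

def adaptive_quantize(mu, sigma, base_step=0.1):
    """Illustrative sketch: round posterior means to a per-dimension grid
    whose spacing grows with the posterior standard deviation, so that
    dimensions the model is uncertain about receive coarser quantization."""
    step = base_step * sigma            # coarser grid where uncertainty is high
    return np.round(mu / step) * step   # quantized latent values

# Hypothetical posterior means and standard deviations for a 4-D latent code.
mu = np.array([0.83, -1.27, 2.04, 0.31])
sigma = np.array([0.05, 0.40, 0.10, 1.20])
print(adaptive_quantize(mu, sigma))
```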
