Paper Title

PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems

Paper Authors

Qing Jin, Zhiyu Chen, Jian Ren, Yanyu Li, Yanzhi Wang, Kaiyuan Yang

Paper Abstract

Processing-in-memory (PIM), an increasingly studied neuromorphic hardware, promises orders-of-magnitude improvements in energy and throughput for deep learning inference. Leveraging the massively parallel and efficient analog computing inside memories, PIM circumvents the data-movement bottlenecks of conventional digital hardware. However, an extra quantization step (i.e., PIM quantization), typically with limited resolution due to hardware constraints, is required to convert the analog computing results into the digital domain. Meanwhile, non-ideal effects are pervasive in PIM quantization because of the imperfect analog-to-digital interface, which further compromises inference accuracy. In this paper, we propose a method for training quantized networks to incorporate PIM quantization, which is ubiquitous in all PIM systems. Specifically, we propose a PIM quantization-aware training (PIM-QAT) algorithm, and introduce rescaling techniques during backward and forward propagation, derived by analyzing the training dynamics, to facilitate training convergence. We also propose two techniques, namely batch normalization (BN) calibration and adjusted precision training, to suppress the adverse effects of the non-ideal linearity and stochastic thermal noise present in real PIM chips. Our method is validated on three mainstream PIM decomposition schemes, and physically on a prototype chip. Compared with directly deploying a conventionally trained quantized model on PIM systems, which does not take this extra quantization step into account and thus fails, our method provides significant improvements. It also achieves inference accuracy on PIM systems comparable to that of conventionally quantized models on digital hardware, across the CIFAR10 and CIFAR100 datasets, using various network depths of the most popular network topology.
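To make the extra PIM quantization step concrete, below is a minimal sketch, not the authors' code: it assumes a memory array whose analog column-wise partial sums are read out through a low-resolution ADC before digital accumulation. The names `pim_matvec`, `quantize_adc`, `rows_per_array`, and `adc_bits` are illustrative and not from the paper.

```python
# Minimal sketch of the "PIM quantization" bottleneck described in the
# abstract: analog partial sums inside the memory array must pass through
# a limited-resolution ADC before re-entering the digital domain.
# All function and parameter names here are hypothetical.
import numpy as np

def quantize_adc(x, adc_bits=4):
    """Uniformly quantize analog partial sums to the ADC's resolution."""
    levels = 2 ** adc_bits - 1
    scale = np.max(np.abs(x)) + 1e-8          # per-array rescaling factor
    q = np.round(np.clip(x / scale, -1.0, 1.0) * levels) / levels
    return q * scale

def pim_matvec(w_q, x_q, rows_per_array=64, adc_bits=4):
    """Matrix-vector product computed array-by-array; each analog partial
    sum is quantized by the ADC before being accumulated digitally."""
    out = np.zeros(w_q.shape[0])
    for start in range(0, w_q.shape[1], rows_per_array):
        partial = w_q[:, start:start + rows_per_array] @ x_q[start:start + rows_per_array]
        out += quantize_adc(partial, adc_bits)  # the extra PIM quantization step
    return out
```

In a PIM-QAT setup, a quantizer of this kind would sit in the forward pass with a straight-through estimator for the gradient, and the per-array scale plays the role of the rescaling the paper analyzes to keep training stable; the details above are an assumption for illustration only.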
