Paper Title

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

Paper Authors

Coelho Jr., Claudionor N., Kuusela, Aki, Li, Shan, Zhuang, Hao, Aarrestad, Thea, Loncar, Vladimir, Ngadiuba, Jennifer, Pierini, Maurizio, Pol, Adrian Alan, Summers, Sioni

Paper Abstract

Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton-proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of $\mathcal{O}(1)\,\mu$s is required. Nanosecond inference and a resource consumption reduced by a factor of 50 when implemented on field-programmable gate array hardware are achieved.
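The core idea of heterogeneous quantization described in the abstract, assigning a different bit width to each layer (and parameter type) so that model size shrinks while accuracy is preserved, can be illustrated with a small sketch. This is not the paper's actual implementation (which builds on automated per-layer quantizer search for FPGA deployment); the `quantize` helper and the per-layer bit allocation below are hypothetical, chosen only to show how per-layer fixed-point bit widths trade precision for storage.

```python
# Illustrative sketch only: signed fixed-point quantization with a
# per-layer bit width, as in heterogeneous quantization schemes.

def quantize(value, bits, int_bits=0):
    """Round `value` onto a signed fixed-point grid with `bits` total bits.

    One bit is reserved for the sign; `int_bits` bits for the integer
    part; the remainder for the fraction. Values are clipped to the
    representable range.
    """
    frac_bits = bits - 1 - int_bits
    scale = 2 ** frac_bits
    max_code = 2 ** (bits - 1) - 1            # largest signed code
    code = max(-max_code - 1, min(max_code, round(value * scale)))
    return code / scale

# Hypothetical per-layer bit allocation, e.g. as an automated search
# might produce: an early layer keeps 6 bits, a later one only 3.
layers = {
    "dense_1": ([0.73, -0.41, 0.05], 6),      # 6-bit weights
    "dense_2": ([0.12, -0.88], 3),            # 3-bit weights
}

total_bits = 0
for name, (weights, bits) in layers.items():
    quantized = [quantize(w, bits) for w in weights]
    total_bits += bits * len(weights)
    print(name, quantized)

# 5 weights at 32-bit float would need 160 bits; here only 24.
print("total weight bits:", total_bits)
```

The precision loss is visible per layer: at 6 bits, 0.73 becomes 0.71875, while at 3 bits, -0.88 saturates to -1.0. An automated procedure like the one in the paper searches over such allocations to keep that loss from degrading task accuracy.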
