Title
Low- and Mixed-Precision Inference Accelerators
Authors
Abstract
With the surging popularity of edge computing, the need to efficiently perform neural network inference on battery-constrained IoT devices has greatly increased. While algorithmic developments enable neural networks to solve increasingly complex tasks, deploying these networks on edge devices can be problematic due to stringent energy, latency, and memory requirements. One way to alleviate these requirements is to heavily quantize the neural network, i.e., to lower the precision of its operands. Taking quantization to the extreme, e.g., by using binary values, opens up new opportunities to increase energy efficiency. Several hardware accelerators exploiting low-precision inference have been created, all aiming to enable neural network inference at the edge. This chapter reviews the design choices of several accelerators supporting extremely quantized networks and their implications for flexibility and energy efficiency.
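As a concrete illustration of the "extreme" end of the quantization spectrum mentioned in the abstract, the sketch below (a hypothetical example, not taken from the chapter) shows why binary values are so attractive in hardware: when weights and activations are constrained to {-1, +1}, a multiply-accumulate over a whole vector collapses into an XNOR followed by a popcount on packed bitwords, replacing many multipliers with cheap bitwise logic.

```python
# Minimal sketch of a binarized dot product via XNOR + popcount.
# Encoding assumption: bit i of the integer holds element i of the
# vector, with bit = 1 encoding +1 and bit = 0 encoding -1.

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed into integers.

    For each position, a*w = +1 when the bits match and -1 otherwise,
    so the sum equals (#matches) - (#mismatches) = 2*popcount(XNOR) - n.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask      # 1 wherever the operands agree
    return 2 * bin(xnor).count("1") - n   # popcount, rescaled to a +/-1 sum


# Example: a = [+1, +1, -1, +1], w = [+1, -1, +1, +1] (LSB = element 0).
a = 0b1011
w = 0b1101
print(binary_dot(a, w, 4))  # -> 0, matching the sum of elementwise products
```

In a hardware accelerator the same reduction is performed by wide XNOR gates feeding a popcount adder tree, which is the key source of the energy-efficiency gains the chapter surveys.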