Paper Title

Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware Constraints

Authors

Adarsha Balaji and Anup Das

Abstract

Spiking Neural Networks (SNNs) are efficient computation models to perform spatio-temporal pattern recognition on resource- and power-constrained platforms. SNNs executed on neuromorphic hardware can further reduce the energy consumption of these platforms. With increasing model size and complexity, mapping SNN-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is attributed to the limitation of a neuro-synaptic core, viz. a crossbar, which can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN-based models that have many neurons and pre-synaptic connections per neuron, (1) connections may need to be pruned after training to fit onto the crossbar resources, leading to a loss in model quality, e.g., accuracy, and (2) the neurons and synapses need to be partitioned and placed on the neuro-synaptic cores of the hardware, which could lead to increased latency and energy consumption. In this work, we propose (1) a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units to significantly improve crossbar utilization and retain all pre-synaptic connections, and (2) SpiNeMap, a novel methodology to map SNNs on neuromorphic hardware with the aim of minimizing energy consumption and spike latency.
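
To make the crossbar fan-in constraint concrete, below is a minimal Python sketch of how a neuron with many pre-synaptic connections could be unrolled into a chain of smaller units, assuming a per-neuron fan-in limit `fanin_limit`. The function name `unroll_neuron` and the chaining scheme (each unit forwards its partial result to the next, occupying one input slot) are illustrative assumptions, not the paper's exact formulation.

```python
from math import ceil


def unroll_neuron(pre_synaptic_weights, fanin_limit):
    """Decompose one neuron with len(pre_synaptic_weights) pre-synaptic
    connections into a chain of units, each using at most `fanin_limit`
    crossbar inputs. Every unit after the first reserves one input slot
    for the partial result forwarded by the previous unit in the chain."""
    n = len(pre_synaptic_weights)
    if n <= fanin_limit:
        # The neuron already fits on a single crossbar; no unrolling needed.
        return [list(pre_synaptic_weights)]

    # First unit uses all of its inputs for original connections.
    units = [list(pre_synaptic_weights[:fanin_limit])]
    idx = fanin_limit
    while idx < n:
        # Subsequent units keep one slot for the chained partial result,
        # so they can accept at most (fanin_limit - 1) original connections.
        take = min(fanin_limit - 1, n - idx)
        units.append(list(pre_synaptic_weights[idx:idx + take]))
        idx += take
    return units


# Example: a neuron with 10 pre-synaptic connections mapped to crossbars
# that support at most 4 inputs per post-synaptic neuron.
weights = list(range(10))
chain = unroll_neuron(weights, fanin_limit=4)
print(len(chain), chain)  # 3 units: [0..3], [4..6], [7..9]
```

In this sketch no connection is dropped: the original 10 connections are spread over 3 homogeneous units instead of being pruned to fit a single 4-input crossbar row, which mirrors the motivation stated in the abstract.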
