Paper Title


SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks

Authors

Abhishek Moitra, Abhiroop Bhattacharjee, Runcong Kuang, Gokul Krishnan, Yu Cao, Priyadarshini Panda

Abstract


SNNs are an active research domain towards energy-efficient machine intelligence. Compared to conventional ANNs, SNNs use temporal spike data and bio-plausible neuronal activation functions, such as Leaky-Integrate-Fire / Integrate-Fire (LIF/IF), for data processing. However, SNNs incur significant dot-product operations, causing high memory and computation overhead on standard von-Neumann computing platforms. Today, In-Memory Computing (IMC) architectures have been proposed to alleviate the "memory-wall bottleneck" prevalent in von-Neumann architectures. Although recent works have proposed IMC-based SNN hardware accelerators, the following have been overlooked: 1) the adverse effects of crossbar non-idealities on SNN performance due to repeated analog dot-product operations over multiple time-steps, and 2) the hardware overheads of essential SNN-specific components such as the LIF/IF and data-communication modules. To this end, we propose SpikeSim, a tool that performs realistic performance, energy, latency, and area evaluation of IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture called SpikeFlow for mapping SNNs. Additionally, the Non-Ideality Computation Engine (NICE) and Energy-Latency-Area (ELA) engine perform hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65nm CMOS implementation and experiments on the CIFAR10, CIFAR100, and TinyImageNet datasets, we find that the LIF/IF neuronal module has a significant area contribution (>11% of the total hardware area). We propose SNN topological modifications leading to 1.24x and 10x reductions in the neuronal module's area and the overall energy-delay product, respectively. Furthermore, in this work, we perform a holistic comparison between IMC-implemented ANNs and SNNs and conclude that a lower number of time-steps is key to achieving higher throughput and energy-efficiency for SNNs compared to 4-bit ANNs.
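The LIF/IF dynamics mentioned in the abstract can be illustrated with a minimal sketch. This is a generic single-neuron simulation over discrete time-steps, not SpikeSim's implementation; the threshold and leak values are illustrative assumptions, and setting `leak=1.0` recovers plain IF behavior.

```python
def lif_neuron(inputs, v_thresh=1.0, leak=0.9):
    """Simulate one Leaky-Integrate-Fire neuron over T time-steps.

    inputs:   weighted input current per time-step (e.g. a crossbar
              dot-product result at each step)
    v_thresh: firing threshold (illustrative value)
    leak:     membrane leak factor; leak=1.0 gives an IF neuron
    Returns the binary spike train (1 = spike, with hard reset to 0).
    """
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x       # leaky integration of input current
        if v >= v_thresh:      # fire when potential crosses threshold
            spikes.append(1)
            v = 0.0            # hard reset after spike
        else:
            spikes.append(0)
    return spikes
```

For example, a constant sub-threshold input of 0.6 per step accumulates until the neuron fires on the second step and then resets, yielding the spike train `[0, 1, 0]` over three steps.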
