Paper Title
Compiling Spiking Neural Networks to Neuromorphic Hardware
Paper Authors
Paper Abstract
Machine learning applications that are implemented with a spike-based computation model, e.g., a Spiking Neural Network (SNN), have great potential to lower energy consumption when executed on neuromorphic hardware. However, compiling and mapping an SNN to such hardware is challenging, especially when the compute and storage resources of the hardware (viz., the crossbar) need to be shared among the neurons and synapses of the SNN. We propose an approach to analyze and compile SNNs on resource-constrained neuromorphic hardware, providing guarantees on key performance metrics such as execution time and throughput. Our approach makes the following three key contributions. First, we propose a greedy technique to partition an SNN into clusters of neurons and synapses such that each cluster fits onto the resources of a crossbar. Second, we exploit the rich semantics and expressiveness of Synchronous Dataflow Graphs (SDFGs) to represent a clustered SNN and analyze its performance using Max-Plus Algebra, considering the available compute and storage capacities, buffer sizes, and communication bandwidth. Third, we propose a fast technique based on self-timed execution to compile and admit SNN-based applications to neuromorphic hardware at run time, adapting dynamically to the available resources on the hardware. We evaluate our approach with standard SNN-based applications and demonstrate a significant performance improvement compared to current practices.
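To make the greedy partitioning concrete, the following is a minimal sketch, not the authors' implementation: it packs neurons into clusters under assumed per-crossbar limits on post-synaptic neurons and on synapses (crosspoints). The function name, its parameters, and the 128x128 crossbar budget in the example are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (not the authors' tool): greedily partition an SNN into
# clusters so that each cluster fits assumed resource limits of one crossbar,
# modeled here as caps on post-synaptic neurons and on synapses (crosspoints).

def greedy_partition(neurons, fanin, max_neurons, max_synapses):
    """neurons: iterable of neuron ids.
    fanin: dict mapping neuron id -> number of incoming synapses.
    Returns a list of clusters (lists of neuron ids)."""
    clusters, current, syn_used = [], [], 0
    for n in neurons:
        need = fanin.get(n, 0)
        # Start a new cluster if adding this neuron would exceed either budget.
        if current and (len(current) + 1 > max_neurons or syn_used + need > max_synapses):
            clusters.append(current)
            current, syn_used = [], 0
        current.append(n)
        syn_used += need
    if current:
        clusters.append(current)
    return clusters

# Illustrative budget: a 128x128 crossbar hosting up to 128 post-synaptic
# neurons and 128*128 synapses (assumed limits, not from the paper).
clusters = greedy_partition(range(1000), {i: 300 for i in range(1000)}, 128, 128 * 128)
```

The Max-Plus Algebra analysis can likewise be illustrated with a small sketch, assuming the clustered SNN has already been translated into a max-plus state matrix. The power-iteration estimate of the max-plus eigenvalue (the steady-state period of the self-timed schedule, whose reciprocal bounds throughput) is a standard technique and not necessarily the exact analysis used in the paper.

```python
# A minimal sketch, assuming a max-plus matrix A where A[i][j] is the delay from
# actor j's firing to actor i's next firing, and -inf marks "no dependency".
# For a strongly connected SDFG, the max-plus eigenvalue of A equals the
# maximum cycle mean, i.e., the steady-state period; throughput = 1 / period.

import math

NEG_INF = -math.inf

def maxplus_matvec(A, x):
    # (A (x) x)[i] = max_j (A[i][j] + x[j])  -- max-plus matrix-vector product
    return [max(a + b for a, b in zip(row, x)) for row in A]

def estimate_period(A, iters=200):
    x = [0.0] * len(A)
    for _ in range(iters):
        x = maxplus_matvec(A, x)
    # The average growth per firing converges to the max-plus eigenvalue
    # (maximum cycle mean) for a strongly connected graph.
    return max(x) / iters

# Toy 2-actor example: execution delays 3 and 5 on a cyclic dependency.
A = [[NEG_INF, 3.0],
     [5.0, NEG_INF]]
period = estimate_period(A)   # ~ 4.0 time units per firing
throughput = 1.0 / period     # ~ 0.25 firings per time unit
```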