Paper Title

You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy

Paper Authors

Srivatsa P, Kyle Timothy Ng Chu, Burin Amornpaisannon, Yaswanth Tavva, Venkata Pavan Kumar Miriyala, Jibin Wu, Malu Zhang, Haizhou Li, Trevor E. Carlson

Paper Abstract

In the past decade, advances in Artificial Neural Networks (ANNs) have allowed them to perform extremely well on a wide range of tasks. In fact, they have reached human parity on image recognition, for example. Unfortunately, the accuracy of these ANNs comes at the expense of a large number of cache and/or memory accesses and compute operations. Spiking Neural Networks (SNNs), a type of neuromorphic, or brain-inspired, network, have recently gained significant interest as power-efficient alternatives to ANNs, because they are sparse, accessing very few weights, and typically use only addition operations instead of the more power-intensive multiply-and-accumulate (MAC) operations. The vast majority of neuromorphic hardware designs support rate-encoded SNNs, where the information is encoded in spike rates. Rate encoding can be seen as an inefficient scheme because it involves the transmission of a large number of spikes. A more efficient encoding scheme, Time-To-First-Spike (TTFS) encoding, encodes information in the relative arrival times of spikes. While TTFS-encoded SNNs are more efficient than rate-encoded SNNs, they have, up to now, performed poorly in terms of accuracy compared to previous methods. Hence, in this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems. To accomplish this, we propose: (1) a novel optimization algorithm for TTFS-encoded SNNs converted from ANNs and (2) a novel hardware accelerator for TTFS-encoded SNNs, with a scalable and low-power design. Overall, our work in TTFS encoding and training improves the accuracy of SNNs to achieve state-of-the-art results on MNIST MLPs, while reducing power consumption by 1.46$\times$ over the state-of-the-art neuromorphic hardware.
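To illustrate the efficiency gap between the two encoding schemes described in the abstract, here is a minimal sketch, not taken from the paper: function names, the time window `t_max`, and the linear intensity-to-time mapping are illustrative assumptions. The key contrast is that TTFS emits exactly one spike per input (a stronger input simply fires earlier), while rate coding emits a spike count proportional to the input.

```python
def ttfs_encode(values, t_max=100.0):
    # TTFS: each input produces exactly ONE spike; stronger inputs fire earlier.
    # Illustrative linear mapping: intensity 1.0 -> time 0, intensity 0.0 -> t_max.
    return [t_max * (1.0 - min(max(v, 0.0), 1.0)) for v in values]

def rate_encode(values, t_max=100):
    # Rate coding: each input produces MANY spikes; the count within the
    # time window is proportional to the input intensity.
    return [round(min(max(v, 0.0), 1.0) * t_max) for v in values]

pixels = [0.75, 0.5, 0.25]        # normalized pixel intensities
ttfs = ttfs_encode(pixels)        # [25.0, 50.0, 75.0] -> 3 spikes in total
rate = rate_encode(pixels)        # [75, 50, 25]       -> 150 spikes in total
```

For these three inputs, TTFS transmits 3 spikes where rate coding transmits 150, which is the sparsity argument the abstract makes for TTFS-encoded SNNs.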
