Paper Title

Training Spiking Neural Networks with Local Tandem Learning

Authors

Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li

Abstract

Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors. However, there is a lack of an efficient and generalized training method for deep SNNs, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity. Our experimental results have also shown that the SNNs thus trained can achieve comparable accuracies to their teacher ANNs on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly. It can be easily implemented on-chip to perform fast parameter calibration and provides robustness against the notorious device non-ideality issues. It therefore opens up a myriad of opportunities for training and deployment of SNNs on ultra-low-power mixed-signal neuromorphic computing chips.

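The abstract describes training each SNN layer locally so that its output mimics the intermediate feature representation of the corresponding layer in a pre-trained teacher ANN. The sketch below illustrates one way such a layer-wise, decoupled update could look in PyTorch. It is not the authors' implementation: the names (SurrogateSpike, LIFLayer, local_ltl_step), the rectangular surrogate gradient, and the rate-matching MSE loss are all illustrative assumptions.

```python
# Minimal sketch of layer-wise teacher-student training in the spirit of
# Local Tandem Learning (LTL). Hypothetical names and loss choices; not the
# paper's released code.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (a common choice)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold.
        return grad_out * (v.abs() < 0.5).float()

class LIFLayer(nn.Module):
    """One fully connected layer of leaky integrate-and-fire neurons."""
    def __init__(self, in_dim, out_dim, tau=2.0, v_th=1.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.tau, self.v_th = tau, v_th

    def forward(self, x_seq):                          # x_seq: (T, B, in_dim)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            v = v * (1.0 - 1.0 / self.tau) + self.fc(x_t)   # leaky integration
            s = SurrogateSpike.apply(v - self.v_th)          # spike generation
            v = v * (1.0 - s)                                # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)                     # (T, B, out_dim)

def local_ltl_step(snn_layer, opt, x_seq, teacher_act):
    """Update ONE SNN layer so its firing rate mimics the teacher ANN activation.
    The local loss is computed against a detached teacher target, so gradients
    never cross layer boundaries (layer-wise decoupling)."""
    rate = snn_layer(x_seq).mean(dim=0)                # firing rate over T time steps
    loss = nn.functional.mse_loss(rate, teacher_act.detach())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    T, B = 8, 4
    layer = LIFLayer(16, 32)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    x_seq = (torch.rand(T, B, 16) < 0.3).float()       # toy Poisson-like input spikes
    teacher_act = torch.rand(B, 32)                    # stand-in for the ANN's hidden activation
    print(local_ltl_step(layer, opt, x_seq, teacher_act))
```

Because each loss is local to its layer and the teacher target is detached, no gradient propagates through the rest of the network, which is what makes the layer-wise learning decoupled in the way the abstract describes.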