Paper Title


LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing

Paper Authors

Zhenzhi Wu, Hehui Zhang, Yihan Lin, Guoqi Li, Meng Wang, Ye Tang

Paper Abstract


Spiking neural networks (SNNs) based on the Leaky Integrate and Fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Thanks to its bio-plausible neuronal dynamics and simplicity, the LIF-SNN benefits from event-driven processing, but usually faces the embarrassment of reduced performance. This may be because in a LIF-SNN the neurons transmit information via spikes. To address this issue, in this work we propose a Leaky Integrate and Analog Fire (LIAF) neuron model, so that analog values can be transmitted among neurons, and a deep network termed LIAF-Net is built on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows the traditional LIF dynamics to maintain its temporal processing capability. In the spatial domain, LIAF is able to integrate spatial information through convolutional or fully-connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. Experimental results indicate that LIAF-Net achieves performance comparable to the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) on bAbI Question Answering (QA) tasks, and achieves state-of-the-art performance on spatiotemporal Dynamic Vision Sensor (DVS) datasets, including MNIST-DVS, CIFAR10-DVS and DVS128 Gesture, with far fewer synaptic weights and much lower computational overhead than traditional networks built with LSTM, GRU, Convolutional LSTM (ConvLSTM) or 3D convolution (Conv3D). Compared with a traditional LIF-SNN, LIAF-Net also shows dramatic accuracy gains in all these experiments. In conclusion, LIAF-Net provides a framework that combines the advantages of both ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
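To make the idea in the abstract concrete, the core difference between LIF and LIAF can be sketched in a few lines: the membrane potential follows the usual leaky integration, the threshold still gates the reset, but the value transmitted downstream is an analog activation of the membrane potential rather than a binary spike. The sketch below is a minimal illustration, not the authors' implementation; the choice of ReLU as the analog activation `f`, the soft reset, and the function/parameter names (`liaf_step`, `alpha`, `v_th`) are assumptions for illustration.

```python
import numpy as np

def liaf_step(x, u_prev, w, alpha=0.9, v_th=1.0):
    """One time step of a LIAF neuron layer (illustrative sketch).

    Temporal domain: leaky integration, as in standard LIF dynamics.
    Spatial domain: fully-connected integration (x @ w); a convolution
    would play the same role in a convolutional LIAF layer.
    Output: an analog value f(u) instead of a 0/1 spike.
    """
    # Dendritic (spatial) integration of the input
    i_t = x @ w
    # Leaky membrane-potential update: decay the old potential, add new input
    u = alpha * u_prev + i_t
    # The threshold still determines where a "fire" event occurs...
    fired = u >= v_th
    # ...but the transmitted value is analog: here f = ReLU (assumed choice)
    out = np.maximum(u, 0.0)
    # Soft reset of the membrane potential where the neuron fired (assumption)
    u_next = np.where(fired, u - v_th, u)
    return out, u_next

# Tiny usage example: one input of size 2, one output neuron
x = np.array([1.0, 0.0])
w = np.array([[1.0], [0.0]])
u0 = np.zeros(1)
out, u1 = liaf_step(x, u0, w)   # u reaches 1.0, fires, analog output 1.0
```

A plain LIF neuron would instead emit `fired.astype(float)` (a binary spike) as its output; keeping `out` analog is exactly the change that lets LIAF layers pass richer values between neurons while retaining LIF's temporal dynamics.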
