Paper Title

Brain-inspired Multilayer Perceptron with Spiking Neurons

Authors

Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang

Abstract

Recently, the Multilayer Perceptron (MLP) has become a hotspot in computer vision. Without inductive bias, MLPs perform well on feature extraction and achieve impressive results. However, due to the simplicity of their structures, their performance depends heavily on the local feature communication mechanism. To further improve the performance of MLPs, we introduce an information communication mechanism from brain-inspired neural networks. The Spiking Neural Network (SNN) is the best-known brain-inspired neural network and has achieved great success on sparse data. Leaky Integrate-and-Fire (LIF) neurons in SNNs are used to communicate between different time steps. In this paper, we incorporate the mechanism of LIF neurons into MLP models to achieve better accuracy without extra FLOPs. We propose a full-precision LIF operation to communicate between patches, including horizontal LIF and vertical LIF in different directions. We also propose group LIF to extract better local features. With LIF modules, our SNN-MLP models achieve 81.9%, 83.3% and 83.5% top-1 accuracy on the ImageNet dataset with only 4.4G, 8.5G and 15.2G FLOPs, respectively, which are state-of-the-art results as far as we know.
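For readers unfamiliar with LIF neurons, the sketch below shows the standard leaky integrate-and-fire dynamics that the paper's LIF modules are inspired by: a membrane potential leakily integrates inputs over a sequence (time steps in an SNN, or patches along one direction in the paper's setting) and emits a binary spike when it crosses a threshold. Note this is not the paper's full-precision LIF operation, only the classical binary-spike form; the parameter names (`tau`, `v_threshold`, `v_reset`) and the hard-reset rule are common conventions, not taken from the paper.

```python
import numpy as np

def lif_scan(inputs, tau=0.25, v_threshold=1.0, v_reset=0.0):
    """Classical leaky integrate-and-fire (LIF) update scanned over a
    1-D sequence of inputs.

    Membrane potential:  v_t = tau * v_{t-1} + x_t   (leaky integration)
    Spike:               s_t = 1 if v_t >= v_threshold else 0
    Reset after spike:   v_t <- v_reset               (hard reset)
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x          # leak the old potential, add new input
        if v >= v_threshold:     # fire when the threshold is crossed
            spikes.append(1.0)
            v = v_reset          # hard reset of the membrane potential
        else:
            spikes.append(0.0)
    return np.array(spikes)
```

A strong input fires immediately, while weak inputs leak away without ever producing a spike; the paper's full-precision variant replaces the binary spike with a continuous output so that no information is discarded between patches.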
