Paper Title
Event-based Video Reconstruction via Potential-assisted Spiking Neural Network
Paper Authors
Paper Abstract
Neuromorphic vision sensors are a new bio-inspired imaging paradigm that reports asynchronous, continuous per-pixel brightness changes, called `events', with high temporal resolution and high dynamic range. So far, event-based image reconstruction methods have been based on artificial neural networks (ANNs) or hand-crafted spatiotemporal smoothing techniques. In this paper, we are the first to implement image reconstruction with a fully spiking neural network (SNN) architecture. As bio-inspired neural networks, SNNs operate with asynchronous binary spikes distributed over time and can potentially lead to greater computational efficiency on event-driven hardware. We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neurons and Membrane Potential (MP) neurons. We find that spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks. Furthermore, to better exploit this temporal information, we propose a hybrid potential-assisted framework (PA-EVSNN) that uses the membrane potential of the spiking neuron. The proposed neuron, referred to as the Adaptive Membrane Potential (AMP) neuron, adaptively updates its membrane potential according to the input spikes. Experimental results demonstrate that our models achieve performance comparable to ANN-based models on the IJRR, MVSEC, and HQF datasets. In terms of energy consumption, EVSNN and PA-EVSNN are 19.36$\times$ and 7.75$\times$ more computationally efficient than their ANN counterparts, respectively.
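For context, the sketch below shows a generic discrete-time Leaky-Integrate-and-Fire (LIF) update of the kind the abstract refers to. It is a minimal illustration of standard LIF dynamics (leaky integration, threshold firing, hard reset); the function name `lif_step`, the parameters `tau` and `v_th`, and the reset rule are assumptions for illustration, not the paper's actual EVSNN/AMP implementation.

```python
import torch

def lif_step(x_t, v_prev, tau=2.0, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire (LIF) neuron.

    x_t    : input current at time t (e.g., weighted event/spike input)
    v_prev : membrane potential carried over from the previous step
    tau    : membrane time constant controlling the leak
    v_th   : firing threshold
    Returns the binary output spikes and the updated membrane potential.
    """
    # Leaky integration: the potential decays toward the input.
    v = v_prev + (x_t - v_prev) / tau
    # Fire wherever the potential crosses the threshold.
    spike = (v >= v_th).float()
    # Hard reset of the potential at spiking locations (one common choice).
    v = v * (1.0 - spike)
    return spike, v

# Example usage on a batch of event-frame inputs (shapes are illustrative).
x = torch.rand(4, 1, 32, 32)          # input at one time step
v = torch.zeros_like(x)               # initial membrane potential
spike, v = lif_step(x, v)
```

The membrane potential `v` here is the per-neuron state that persists across time steps, which is the mechanism the paper exploits as implicit memory for time-dependent reconstruction.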