Paper Title
lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents
Paper Authors
Paper Abstract
Recent advances have shown that SNN-based systems can efficiently perform unsupervised continual learning due to their bio-plausible learning rules, e.g., Spike-Timing-Dependent Plasticity (STDP). Such learning capabilities are especially beneficial for use cases like autonomous agents (e.g., robots and UAVs) that need to continuously adapt to dynamically changing scenarios/environments, where new data gathered directly from the environment may have novel features that should be learned online. Current state-of-the-art works employ high-precision weights (i.e., 32-bit) for both the training and inference phases, which poses high memory and energy costs, thereby hindering efficient embedded implementations of such systems on battery-driven mobile autonomous platforms. On the other hand, reducing the precision may jeopardize the quality of unsupervised continual learning due to information loss. Toward this, we propose lpSpikeCon, a novel methodology to enable low-precision SNN processing for efficient unsupervised continual learning on resource-constrained autonomous agents/systems. Our lpSpikeCon methodology employs the following key steps: (1) analyzing the impact of training the SNN model with reduced weight precision under unsupervised continual learning settings on the inference accuracy; (2) leveraging this study to identify the SNN parameters that have a significant impact on the inference accuracy; and (3) developing an algorithm to search for the respective SNN parameter values that improve the quality of unsupervised continual learning. The experimental results show that our lpSpikeCon can reduce the weight memory of the SNN model by 8x (i.e., by judiciously employing 4-bit weights) for performing online training with unsupervised continual learning, and achieves no accuracy loss in the inference phase compared to the baseline model with 32-bit weights across different network sizes.
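The core idea the abstract describes (STDP-style online weight updates combined with low-precision, e.g., 4-bit, weight storage) can be sketched as follows. This is a simplified illustration under assumed conventions, not the authors' exact algorithm: `quantize` and `stdp_update` are hypothetical helper names, and the trace-based potentiation rule is a common textbook simplification of STDP.

```python
import numpy as np

def quantize(w, bits=4, w_max=1.0):
    """Uniformly quantize weights in [0, w_max] to 2**bits discrete levels.

    With bits=4, every weight is stored as one of 16 values, giving the
    8x memory reduction relative to 32-bit floats mentioned in the paper.
    """
    levels = 2**bits - 1
    return np.round(np.clip(w, 0.0, w_max) / w_max * levels) / levels * w_max

def stdp_update(w, pre_trace, post_spike, lr=0.01, bits=4):
    """One simplified STDP potentiation step, then re-quantization.

    w          : (n_post, n_pre) weight matrix, already quantized
    pre_trace  : (n_pre,) decayed history of presynaptic spikes
    post_spike : (n_post,) boolean vector of postsynaptic spikes now

    Synapses whose presynaptic neuron fired shortly before a postsynaptic
    spike are strengthened; the result is snapped back to the 4-bit grid
    so the weight memory stays low-precision during online training.
    """
    dw = lr * np.outer(post_spike.astype(float), pre_trace)
    return quantize(w + dw, bits=bits)
```

In this sketch the learning rate `lr` plays the role of one of the "SNN parameters with a significant impact on inference accuracy" that lpSpikeCon's search algorithm would tune: with coarse weight grids, an update smaller than half a quantization step is rounded away entirely, so parameter values must be chosen relative to the chosen precision.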