Paper Title
A 28-nm Convolutional Neuromorphic Processor Enabling Online Learning with Spike-Based Retinas
Authors
Abstract
In an attempt to follow biological information representation and organization principles, the field of neuromorphic engineering is usually approached bottom-up, from biophysical models to large-scale integration in silico. While ideal as experimentation platforms for cognitive computing and neuroscience, bottom-up neuromorphic processors have yet to demonstrate an efficiency advantage over specialized neural network accelerators on real-world problems. Top-down approaches aim to address this difficulty by (i) starting from the applicative problem and (ii) investigating how to make the associated algorithms hardware-efficient and biologically plausible. In order to leverage the data sparsity of spike-based neuromorphic retinas for adaptive edge computing and vision applications, we follow a top-down approach and propose SPOON, a 28-nm event-driven CNN (eCNN). It embeds online learning with only 16.8% power and 11.8% area overheads using the biologically-plausible direct random target projection (DRTP) algorithm. With an energy per classification of 313 nJ at 0.6 V and a 0.32-mm$^2$ area for accuracies of 95.3% (on-chip training) and 97.5% (off-chip training) on MNIST, we demonstrate that SPOON reaches the efficiency of conventional machine learning accelerators while embedding on-chip learning and being compatible with event-based sensors, a point that we further emphasize with N-MNIST benchmarking.
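The core idea of DRTP cited in the abstract is that hidden layers are trained with a fixed random projection of the one-hot *target* instead of a backpropagated error, which removes update locking and the need to store a backward pass. Below is a minimal NumPy sketch of that rule on a toy two-layer fully-connected network; the layer sizes, learning rate, ReLU activation, and squared-error output loss are illustrative assumptions and not the SPOON hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network trained with Direct Random Target Projection (DRTP):
# sizes are arbitrary assumptions for illustration.
n_in, n_hid, n_out = 16, 32, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output weights
B1 = rng.normal(0.0, 0.1, (n_hid, n_out))  # FIXED random target-projection matrix (never trained)

def drtp_step(x, y_target, lr=0.01):
    """One DRTP update on a single example; returns the squared-error loss."""
    global W1, W2
    # Forward pass (linear output; softmax omitted for brevity)
    z1 = W1 @ x
    h = np.maximum(z1, 0.0)          # ReLU
    y = W2 @ h
    # Output layer: standard delta rule on the output error
    e_out = y - y_target
    # Hidden layer: the learning signal is a fixed random projection of the
    # TARGET (B1 @ y_target), not a backpropagated error -- this is DRTP
    delta_hid = (B1 @ y_target) * (z1 > 0)
    W2 = W2 - lr * np.outer(e_out, h)
    W1 = W1 - lr * np.outer(delta_hid, x)
    return float(0.5 * np.sum(e_out ** 2))
```

Because `B1` is fixed and the hidden update depends only on the target and the local activation, each layer can update as soon as its forward pass completes, which is what makes the rule attractive for low-overhead on-chip learning.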