Paper Title


Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for Sensory Processing

Paper Authors

Nair, Harideep, Shen, John Paul, Smith, James E.

Paper Abstract


Temporal Neural Networks (TNNs) use time as a resource to represent and process information, mimicking the behavior of the mammalian neocortex. This work focuses on implementing TNNs using off-the-shelf digital CMOS technology. A microarchitecture framework is introduced with a hierarchy of building blocks: multi-neuron columns, multi-column layers, and multi-layer TNNs. We present a direct CMOS gate-level implementation of the multi-neuron column model as the key building block for TNNs. Post-synthesis results are obtained using Synopsys tools and a 45 nm CMOS standard cell library. The TNN microarchitecture framework is embodied in a set of characteristic equations for assessing the total gate count, die area, compute time, and power consumption of any TNN design. We develop a multi-layer TNN prototype of 32M gates. In a 7 nm CMOS process, it consumes only 1.54 mm^2 of die area and 7.26 mW of power, and can process 28x28 images at 107M FPS (9.34 ns per image). We evaluate the prototype's performance and complexity relative to a recent state-of-the-art TNN model.
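The abstract does not give the paper's characteristic equations themselves, but the throughput figure it reports can be checked directly: a per-image latency determines the frame rate as its reciprocal. A minimal sketch (the function name is illustrative, not from the paper):

```python
def frames_per_second(ns_per_image: float) -> float:
    """Convert per-image processing latency in nanoseconds to frames per second."""
    return 1e9 / ns_per_image

# The abstract's 9.34 ns per image corresponds to roughly 107 million FPS,
# consistent with the quoted 107M FPS figure.
fps = frames_per_second(9.34)
print(f"{fps:.3e} FPS")
```

This assumes fully pipelined processing, i.e. one image completes every 9.34 ns, which is how a throughput quoted alongside a per-image latency is typically meant.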
