Title
The computational and learning benefits of Daleian neural networks
Authors
Abstract
Dale's principle implies that biological neural networks are composed of neurons that are either excitatory or inhibitory. While the number of possible architectures of such Daleian networks is exponentially smaller than that of non-Daleian ones, the computational and functional implications of the brain's use of Daleian networks are mostly unknown. Here, we use models of recurrent spiking neural networks and rate-based networks to show, surprisingly, that despite the structural limitations on Daleian networks, they can approximate the computations performed by non-Daleian networks to a very high degree of accuracy. Moreover, we find that Daleian networks are more functionally robust to synaptic noise. We then show that, unlike non-Daleian networks, Daleian ones can learn efficiently by tuning single-neuron features, nearly as well as by tuning individual synaptic weights, suggesting a simpler and more biologically plausible learning mechanism. We thus suggest that, in addition to architectural simplicity, Dale's principle confers computational and learning benefits on biological networks, and offers new directions for constructing and training biologically inspired artificial neural networks.
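To make the structural constraint concrete, here is a minimal sketch (not the paper's model; network size, excitatory/inhibitory ratio, and dynamics are illustrative assumptions) of a rate-based network obeying Dale's principle: each neuron is assigned a fixed sign, so all of its outgoing synaptic weights are either all non-negative (excitatory) or all non-positive (inhibitory).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # number of neurons (illustrative size, not from the paper)

# Dale's principle: each neuron is excitatory or inhibitory, so all of its
# outgoing weights share one sign. Here, 80% excitatory is an assumed ratio.
signs = np.where(rng.random(n) < 0.8, 1.0, -1.0)

# Non-negative magnitudes; column j holds neuron j's outgoing weights,
# so multiplying column j by signs[j] enforces the constraint.
W = np.abs(rng.normal(size=(n, n))) * signs[np.newaxis, :]

def step(r, x, W, tau=0.1, dt=0.01):
    """One Euler step of an assumed rate dynamics dr/dt = (-r + tanh(W r + x)) / tau."""
    return r + (dt / tau) * (-r + np.tanh(W @ r + x))

# Verify the Daleian sign constraint: every column is single-signed.
assert all((W[:, j] >= 0).all() or (W[:, j] <= 0).all() for j in range(n))
```

In this setting, learning by "tuning single-neuron features" corresponds to adjusting one scalar per neuron (e.g. rescaling a column's magnitude), which preserves the sign constraint automatically, whereas unconstrained per-synapse updates in a non-Daleian network can flip individual weight signs.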