Paper Title

Mesoscopic modeling of hidden spiking neurons

Paper Authors

Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner

Paper Abstract

Can we use spiking neural networks (SNN) as generative models of multi-neuronal recordings, while taking into account that most neurons are unobserved? Modeling the unobserved neurons with large pools of hidden spiking neurons leads to severely underconstrained problems that are hard to tackle with maximum likelihood estimation. In this work, we use coarse-graining and mean-field approximations to derive a bottom-up, neuronally-grounded latent variable model (neuLVM), where the activity of the unobserved neurons is reduced to a low-dimensional mesoscopic description. In contrast to previous latent variable models, neuLVM can be explicitly mapped to a recurrent, multi-population SNN, giving it a transparent biological interpretation. We show, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform efficient model inversion of large SNNs, in the sense that it can recover connectivity parameters, infer single-trial latent population activity, reproduce ongoing metastable dynamics, and generalize when subjected to perturbations mimicking photo-stimulation.
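The coarse-graining step described in the abstract replaces the spikes of a large homogeneous pool of hidden neurons with a single mesoscopic variable: the population activity, i.e. the fraction of neurons spiking per time bin. The sketch below illustrates this idea for one pool of GLM-style escape-noise neurons; it is a minimal toy illustration, not the paper's neuLVM algorithm, and all parameter values (`N`, `J`, `I_ext`, the escape-rate constants) are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 400      # neurons in one homogeneous pool (hypothetical size)
T = 200      # number of time bins
dt = 1e-3    # bin width: 1 ms
tau = 0.02   # membrane time constant (s)
J = 0.1      # recurrent coupling strength (hypothetical value)
I_ext = 1.5  # constant external drive (hypothetical value)

def escape_rate(u, beta=4.0, theta=1.0, rho0=10.0):
    # Exponential escape noise: instantaneous firing intensity (Hz)
    # as a function of membrane potential, as in GLM spiking neurons.
    return rho0 * np.exp(beta * (u - theta))

u = np.zeros(N)   # membrane potentials of the microscopic neurons
A = np.zeros(T)   # mesoscopic population activity (spikes / neuron / s)

for t in range(T):
    rho = escape_rate(u)
    # Bernoulli spiking with probability 1 - exp(-rho * dt) per bin
    spikes = rng.random(N) < 1.0 - np.exp(-rho * dt)
    # Coarse-graining: keep only the fraction of the pool that spiked
    A[t] = spikes.mean() / dt
    # Leaky integration; recurrent input depends on A[t] only,
    # which is the mean-field closure used at the mesoscopic level
    u += dt / tau * (-u + I_ext + J * tau * A[t])
    u[spikes] = 0.0   # reset spiking neurons
```

The key point is that the recurrent input in the update depends on the pool only through `A[t]`: once the dynamics are written this way, the high-dimensional hidden spike trains can be replaced by the low-dimensional trajectory of `A`, which is what makes likelihood-based inference over hidden pools tractable.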
