Paper title
Autonomous learning of nonlocal stochastic neuron dynamics
Paper authors
Paper abstract
Neuronal dynamics is driven by externally imposed or internally generated random excitations/noise, and is often described by systems of random or stochastic ordinary differential equations. Such systems admit a distribution of solutions, which is (partially) characterized by the single-time joint probability density function (PDF) of the system states. This PDF can be used to calculate information-theoretic quantities, such as the mutual information between the stochastic stimulus and various internal states of the neuron (e.g., membrane potential), as well as various spiking statistics. When the random excitations are modeled as Gaussian white noise, the joint PDF of the neuron states exactly satisfies a Fokker-Planck equation. However, most biologically plausible noise sources are correlated (colored). In this case, the resulting PDF equations require a closure approximation. We propose two methods for closing such equations: a modified nonlocal large-eddy-diffusivity closure and a data-driven closure relying on sparse regression to learn relevant features. The closures are tested on the stochastic non-spiking leaky integrate-and-fire and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual information and total correlation between the random stimulus and the internal states of the neuron are calculated for the FHN neuron.
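For concreteness, below is a minimal Monte Carlo sketch of the setup described in the abstract: a FitzHugh-Nagumo neuron driven by sine-Wiener noise, with the single-time joint PDF of (stimulus, membrane potential) and the corresponding mutual information estimated by histogramming the sample ensemble. It assumes the common definition of sine-Wiener noise, xi(t) = A sin(sqrt(2/tau) W(t)) with W a Wiener process, uses illustrative FHN parameters and an explicit Euler scheme, and routes the noise through the membrane-potential equation; all variable names and values are assumptions for illustration, and the paper's closure approximations are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper).
A, tau = 0.3, 1.0                        # sine-Wiener noise amplitude and correlation time
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5   # FitzHugh-Nagumo parameters
dt, T = 1e-3, 20.0                       # time step and final time
n_steps = int(T / dt)
n_samples = 20_000                       # Monte Carlo ensemble size

# Ensemble state: membrane potential v, recovery variable w, driving Wiener path W.
v = np.full(n_samples, -1.0)
w = np.full(n_samples, -0.5)
W = np.zeros(n_samples)

for _ in range(n_steps):
    W += np.sqrt(dt) * rng.standard_normal(n_samples)  # Wiener increments
    xi = A * np.sin(np.sqrt(2.0 / tau) * W)             # sine-Wiener (bounded, colored) noise
    # Explicit Euler step of the FHN system, with the noise entering the v-equation.
    dv = dt * (v - v**3 / 3.0 - w + I_ext + xi)
    dw = dt * eps * (v + a - b * w)
    v += dv
    w += dw

# Histogram estimate of the single-time joint PDF p(xi, v) at t = T.
p_joint, xi_edges, v_edges = np.histogram2d(xi, v, bins=40, density=True)
p_joint *= np.diff(xi_edges)[:, None] * np.diff(v_edges)[None, :]  # density -> cell probability
p_xi = p_joint.sum(axis=1)
p_v = p_joint.sum(axis=0)

# Mutual information I(xi; v) = sum p(xi, v) * log[ p(xi, v) / (p(xi) p(v)) ].
mask = p_joint > 0
mi = np.sum(p_joint[mask] * np.log(p_joint[mask] / np.outer(p_xi, p_v)[mask]))
print(f"estimated I(xi; v(T)) ~ {mi:.3f} nats")
```

Histogram-based mutual-information estimates are biased for finite sample and bin counts; the sketch is only meant to make the quantities named in the abstract concrete, not to reproduce the paper's results.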