Paper title
Efficient, probabilistic analysis of combinatorial neural codes
Paper authors
Paper abstract
Artificial and biological neural networks (ANNs and BNNs) can encode inputs in the form of combinations of individual neurons' activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity -- from factorial to quadratic time -- of direct algebraic methods previously applied to small examples and apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing algebraic, geometric, and topological characteristics of combinatorial neural codes and provide insights into how such characteristics are related to learning and experience in neural networks. We introduce a procedure to perform hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to, without observing any inputs or outputs, estimate the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning.