Paper Title

Lateral predictive coding revisited: Internal model, symmetry breaking, and response time

Authors

Zhen-Ye Huang, Xin-Yi Fan, Jianwen Zhou, Hai-Jun Zhou

Abstract

Predictive coding is a promising theoretical framework in neuroscience for understanding information transmission and perception. It posits that the brain perceives the external world through internal models and updates these models under the guidance of prediction errors. Previous studies on predictive coding emphasized top-down feedback interactions in hierarchical multi-layered networks but largely ignored lateral recurrent interactions. We perform analytical and numerical investigations in this work on the effects of single-layer lateral interactions. We consider a simple predictive response dynamics and run it on the MNIST dataset of hand-written digits. We find that learning will generally break the interaction symmetry between peer neurons, and that high input correlation between two neurons does not necessarily bring strong direct interactions between them. The optimized network responds to familiar input signals much faster than to novel or random inputs, and it significantly reduces the correlations between the output states of pairs of neurons.
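The abstract does not spell out the predictive response dynamics, but the general idea of single-layer lateral predictive coding can be illustrated with a minimal sketch: each neuron's output is predicted from the states of its peers through a lateral interaction matrix `W` (which need not be symmetric), and the network relaxes by reducing the prediction error. All names, the update rule, and the parameter values below are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16                                   # number of peer neurons in the single layer
W = 0.1 * rng.standard_normal((N, N))    # lateral interactions; asymmetry is allowed
np.fill_diagonal(W, 0.0)                 # no self-prediction

def respond(s, W, eta=0.1, max_steps=500, tol=1e-6):
    """Relax the neural states x under an error-driven update (illustrative only).

    At the fixed point dx = 0, the state satisfies x = s - W @ x, i.e. each
    neuron encodes what its peers fail to predict about the input s.
    The number of steps to converge serves as a crude 'response time'.
    """
    x = np.zeros_like(s)
    for t in range(max_steps):
        e = s - W @ x            # prediction error of the input given peer states
        dx = eta * (e - x)       # leaky, error-driven state update
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x, t

s = rng.standard_normal(N)       # a random (novel) input signal
x, steps = respond(s, W)
```

In this toy picture, a trained `W` that predicts familiar inputs well would shrink the error `e` quickly, so familiar signals converge in fewer steps than novel or random ones, which is the qualitative effect the abstract reports.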
