Paper Title


Meta-Learning the Inductive Biases of Simple Neural Circuits

Authors

Dorrell, William, Yuffa, Maria, Latham, Peter

Abstract


Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam's razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals' neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating its intended use: understanding the role of circuit features through the resulting inductive bias.
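As a rough illustration of the abstract's core idea: for ridge (regularised linear) regression, one of the cases where the inductive bias is analytically known, "functions the learner finds easy to generalise" can be computed in closed form rather than meta-learned. The sketch below is not from the paper; the setup (Gaussian inputs, `d = 5` features, 3 training points, ridge penalty `lam`) is an assumed toy example. It finds the unit-norm linear target that a few-shot ridge learner generalises best, and compares it to a random target of the same norm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 5, 3, 200          # fewer training points than features
Xtr = rng.normal(size=(n_train, d))     # training inputs
Xte = rng.normal(size=(n_test, d))      # held-out inputs for generalisation error
lam = 1e-2                              # ridge penalty

# Ridge regression is linear in the targets: w_hat = A @ y_train.
A = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T)

# For a linear target f(x) = x @ w, the held-out prediction error is
# ||Xte @ (A @ Xtr - I) @ w||^2 -- a quadratic form in w.
M = Xte @ (A @ Xtr - np.eye(d))
Q = M.T @ M

# The easiest-to-generalise unit-norm target is the bottom eigenvector of Q
# (np.linalg.eigh returns eigenvalues in ascending order).
evals, evecs = np.linalg.eigh(Q)
w_easy = evecs[:, 0]

def gen_error(w):
    """Mean squared held-out error of ridge trained on the 3 points."""
    preds = Xte @ (A @ (Xtr @ w))
    return np.mean((preds - Xte @ w) ** 2)

w_rand = rng.normal(size=d)
w_rand /= np.linalg.norm(w_rand)

err_easy, err_rand = gen_error(w_easy), gen_error(w_rand)
print(err_easy, err_rand)
```

The "easy" target lies in the span of the training inputs, so the under-determined learner recovers it almost exactly, while a generic target of the same norm generalises poorly. The paper's tool plays the same game without closed forms: it meta-learns such easy-to-generalise functions for circuits where no analytic expression exists.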
