Paper Title

Learning to Learn with Feedback and Local Plasticity

Paper Authors

Jack Lindsey, Ashok Litwin-Kumar

Paper Abstract

Interest in biologically inspired alternatives to backpropagation is driven by the desire to both advance connections between deep learning and neuroscience and address backpropagation's shortcomings on tasks such as online, continual learning. However, local synaptic learning rules like those employed by the brain have so far failed to match the performance of backpropagation in deep networks. In this study, we employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules. Importantly, the feedback connections are not tied to the feedforward weights, avoiding biologically implausible weight transport. Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures. Surprisingly, this approach matches or exceeds a state-of-the-art gradient-based online meta-learning algorithm on regression and classification tasks, excelling in particular at continual learning. Analysis of the weight updates employed by these models reveals that they differ qualitatively from gradient descent in a way that reduces interference between updates. Our results suggest the existence of a class of biologically plausible learning mechanisms that not only match gradient descent-based learning, but also overcome its limitations.
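To make the core mechanism concrete, the following is a minimal sketch of the idea described in the abstract: a multi-layer network whose hidden layer receives error information through fixed feedback weights that are independent of the feedforward weights (no weight transport), and whose weight updates are local, Hebbian-style products of pre-/post-synaptic activity and the delivered feedback signal. All names, the architecture, and the specific update rule here are illustrative assumptions in the spirit of feedback-alignment-style learning, not the paper's exact meta-learned rules; in the paper, quantities like the per-layer learning rates would themselves be meta-learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network; sizes and rule are illustrative only.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # feedforward weights, layer 1
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # feedforward weights, layer 2
B = rng.normal(0, 0.5, (n_hid, n_out))   # feedback weights: NOT tied to W2.T

eta = 0.05  # in the paper, such rates would be meta-learned in an outer loop

def step(x, y_target):
    """One online example: forward pass, feedback signal, local updates."""
    global W1, W2
    h = np.tanh(W1 @ x)   # hidden activity
    y = W2 @ h            # output
    e = y_target - y      # output error, available locally at the output layer
    fb = B @ e            # error broadcast to hidden units via fixed feedback
    # Local updates: each weight change depends only on pre-/post-synaptic
    # activity and the feedback signal delivered to that layer.
    W2 += eta * np.outer(e, h)
    W1 += eta * np.outer(fb * (1 - h ** 2), x)
    return float((e ** 2).mean())

# Online learning on a fixed random linear regression task
A = rng.normal(0, 1.0, (n_out, n_in))
losses = []
for _ in range(500):
    x = rng.normal(0, 1.0, n_in)
    losses.append(step(x, A @ x))
print(f"mean loss, first 50 steps: {np.mean(losses[:50]):.3f}")
print(f"mean loss, last 50 steps:  {np.mean(losses[-50:]):.3f}")
```

Even with random, untied feedback weights, the hidden-layer updates tend to align with useful credit assignment over time, so the online loss decreases; the paper's contribution is to meta-train such feedback pathways and plasticity rules rather than leave them random.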
