Paper Title
Kernel similarity matching with Hebbian neural networks
Paper Authors
Paper Abstract
Recent works have derived neural networks with online correlation-based learning rules to perform kernel similarity matching. These works applied existing linear similarity matching algorithms to nonlinear features generated with random Fourier methods. In this paper, we attempt to perform kernel similarity matching by directly learning the nonlinear features. Our algorithm proceeds by deriving and then minimizing an upper bound on the sum of squared errors between output and input kernel similarities. The construction of our upper bound leads to online correlation-based learning rules which can be implemented with a one-layer recurrent neural network. In addition to generating high-dimensional, linearly separable representations, we show that our upper bound naturally yields representations which are sparse and selective for specific input patterns. We compare the approximation quality of our method to that of the neural random Fourier method and variants of the popular but non-biological Nyström method for approximating the kernel matrix. Our method appears comparable to or better than randomly sampled Nyström methods when the outputs are relatively low dimensional (although still potentially higher dimensional than the inputs), but less faithful when the outputs are very high dimensional.
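For a concrete picture of the setup, kernel similarity matching amounts to minimizing an objective of the form sum over pairs (i, j) of (k(x_i, x_j) - y_i^T y_j)^2, matching output similarities to input kernel similarities. The sketch below is not the paper's upper-bound-derived algorithm; it is a minimal, hypothetical NumPy implementation of the prior linear similarity matching network the abstract refers to, with a Hebbian feedforward update and an anti-Hebbian lateral update settling through recurrent dynamics. All function names, learning rates, and sizes are illustration choices, not the paper's.

```python
# Hedged sketch: an online Hebbian/anti-Hebbian network for *linear*
# similarity matching, in the style of prior work referenced in the
# abstract. The paper's own rules, derived from its upper bound, differ.
import numpy as np

rng = np.random.default_rng(1)

def linear_similarity_matching(X, k_out, n_epochs=10, lr=0.01):
    """Online linear similarity matching: min_Y ||X X^T - Y Y^T||_F^2.

    W: feedforward (Hebbian) weights; M: lateral (anti-Hebbian) weights.
    For each input x, the recurrent dynamics settle at y = M^{-1} W x.
    """
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(k_out, d))
    M = np.eye(k_out)
    Y = np.zeros((n, k_out))
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            x = X[i]
            y = np.linalg.solve(M, W @ x)   # steady state of recurrent dynamics
            W += lr * (np.outer(y, x) - W)  # Hebbian: correlate output with input
            M += lr * (np.outer(y, y) - M)  # anti-Hebbian: decorrelate outputs
            Y[i] = y
    return Y

X = rng.normal(size=(100, 10))
Y = linear_similarity_matching(X, k_out=3)
print("residual:", np.linalg.norm(X @ X.T - Y @ Y.T, "fro"))
```

Prior works obtained kernel similarity matching by feeding random Fourier features z(x) into exactly this kind of linear network; the approach described in the abstract instead learns the nonlinear features directly.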
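To make the comparison in the final sentences concrete, here is a hedged sketch of the two non-neural baselines named in the abstract: random Fourier features and the randomly sampled Nyström method, each evaluated by how well it reproduces an exact kernel matrix. The Gaussian kernel, sample sizes, and helper names are assumptions for illustration, not the paper's experimental setup.

```python
# Hedged sketch: two standard kernel-matrix approximations used as
# baselines in the abstract's comparison. Hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(X, Z, sigma=1.0):
    """Exact Gaussian (RBF) kernel matrix between rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def random_fourier_features(X, n_features=256, sigma=1.0):
    """Random Fourier features z(x) with E[z(x)^T z(x')] ~= k(x, x')."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def nystrom_approximation(X, n_landmarks=64, sigma=1.0):
    """Randomly sampled Nystrom approximation K ~= C W^+ C^T."""
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    C = gaussian_kernel(X, X[idx], sigma)  # n x m cross-kernel
    W = C[idx]                             # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

X = rng.normal(size=(200, 5))
K = gaussian_kernel(X, X)

Z = random_fourier_features(X)             # outputs whose Gram matrix approximates K
K_nys = nystrom_approximation(X)
print("RFF error:    ", ((K - Z @ Z.T) ** 2).sum())
print("Nystrom error:", np.linalg.norm(K - K_nys, "fro") ** 2)
```

In this framing, the squared Frobenius error against the exact kernel matrix plays the role of the approximation-quality measure the abstract alludes to; the abstract's claim is that the learned features fare best against the Nyström baseline in the low-dimensional-output regime.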