Paper Title

Analytic Learning of Convolutional Neural Network For Pattern Recognition

Paper Authors

Huiping Zhuang, Zhiping Lin, Yimin Yang, Kar-Ann Toh

Paper Abstract

Training convolutional neural networks (CNNs) with back-propagation (BP) is time-consuming and resource-intensive, particularly in view of the need to visit the dataset multiple times. In contrast, analytic learning attempts to obtain the weights in one epoch. However, existing attempts at analytic learning have considered only the multilayer perceptron (MLP). In this article, we propose an analytic convolutional neural network learning (ACnnL). Theoretically, we show that ACnnL builds a closed-form solution similar to that of its MLP counterpart, but with a different regularization constraint. Consequently, we are able to answer, to a certain extent, why CNNs usually generalize better than MLPs from the implicit regularization point of view. ACnnL is validated by conducting classification tasks on several benchmark datasets. It is encouraging that ACnnL trains CNNs in a significantly faster manner, with prediction accuracies reasonably close to those obtained using BP. Moreover, our experiments disclose a unique advantage of ACnnL in small-sample scenarios where training data are scarce or expensive.
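The abstract describes analytic learning as obtaining weights in one epoch through a closed-form, regularized solution. As a rough illustration of that general idea only, and not the paper's actual ACnnL derivation (which is not reproduced here), below is a minimal NumPy sketch of the standard ridge-style closed-form weight solve commonly used in analytic learning; the function name `ridge_closed_form` and the regularization strength `lam` are illustrative assumptions.

```python
import numpy as np

def ridge_closed_form(X, Y, lam=1e-3):
    """Solve W = argmin_W ||X W - Y||^2 + lam * ||W||^2 in closed form.

    X: (n_samples, n_features) input activations.
    Y: (n_samples, n_targets) target labels (e.g., one-hot).
    Returns W of shape (n_features, n_targets).
    """
    d = X.shape[1]
    # Regularized normal equations: (X^T X + lam * I) W = X^T Y.
    # A single solve replaces iterative gradient updates over many epochs.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy usage: 100 samples, 32 features, 10 classes with one-hot targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))
Y = np.eye(10)[rng.integers(0, 10, size=100)]
W = ridge_closed_form(X, Y)
print(W.shape)  # (32, 10)
```

Because the weights come from one linear solve over the whole dataset, such methods need only a single pass over the data, which is the speed advantage the abstract contrasts with multi-epoch BP training.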
