Paper Title
Design of Supervision-Scalable Learning Systems: Methodology and Performance Benchmarking
Paper Authors
Paper Abstract
The design of robust learning systems that offer stable performance under a wide range of supervision degrees is investigated in this work. We choose the image classification problem as an illustrative example and focus on the design of modularized systems that consist of three learning modules: representation learning, feature learning, and decision learning. We discuss ways to adjust each module so that the design is robust with respect to different numbers of training samples. Based on these ideas, we propose two families of learning systems. One adopts the classical histogram of oriented gradients (HOG) features while the other uses successive-subspace-learning (SSL) features. We test their performance against LeNet-5, which is an end-to-end optimized neural network, on the MNIST and Fashion-MNIST datasets. The number of training samples per image class goes from the extremely weak supervision condition (i.e., 1 labeled sample per class) to the strong supervision condition (i.e., 4096 labeled samples per class) with a gradual transition in between (i.e., $2^n$, $n=0, 1, \cdots, 12$). Experimental results show that the two families of modularized learning systems have more robust performance than LeNet-5. They both outperform LeNet-5 by a large margin for small $n$ and have performance comparable with that of LeNet-5 for large $n$.
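
The sketch below is a minimal illustration of the supervision-sweep protocol described in the abstract: draw $2^n$ labeled samples per class, extract HOG features, train a simple classifier, and report test accuracy. It is not the authors' full representation/feature/decision pipeline; the HOG parameters, the SVM decision module, and the `images`/`labels` arrays (MNIST-style 28x28 grayscale images with integer class labels) are assumptions made for illustration only.

```python
# Minimal sketch of the weak-to-strong supervision benchmark (assumed setup,
# not the paper's exact system). Requires numpy, scikit-image, scikit-learn.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def hog_features(imgs):
    """Histogram-of-oriented-gradients descriptor per image (assumed HOG settings)."""
    return np.array([
        hog(im, orientations=8, pixels_per_cell=(7, 7), cells_per_block=(2, 2))
        for im in imgs
    ])

def sample_per_class(images, labels, n_per_class, rng):
    """Randomly pick n_per_class labeled training examples from every class."""
    idx = []
    for c in np.unique(labels):
        cls_idx = np.flatnonzero(labels == c)
        idx.extend(rng.choice(cls_idx, size=n_per_class, replace=False))
    idx = np.array(idx)
    return images[idx], labels[idx]

def supervision_sweep(train_imgs, train_lbls, test_imgs, test_lbls,
                      max_n=12, seed=0):
    """Evaluate accuracy as labeled samples per class grow from 2^0 to 2^max_n."""
    rng = np.random.default_rng(seed)
    test_feats = hog_features(test_imgs)
    for n in range(max_n + 1):
        x, y = sample_per_class(train_imgs, train_lbls, 2 ** n, rng)
        clf = SVC(kernel="rbf")  # stand-in decision-learning module (assumption)
        clf.fit(hog_features(x), y)
        acc = accuracy_score(test_lbls, clf.predict(test_feats))
        print(f"2^{n} labeled samples per class: test accuracy = {acc:.4f}")
```

Under this setup, calling `supervision_sweep(images, labels, test_images, test_labels)` prints one accuracy per supervision level, which is the kind of curve the abstract compares against LeNet-5 for small and large $n$.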