Paper Title
Semi-supervised dictionary learning with graph regularization and active points
Paper Authors
Paper Abstract
Supervised dictionary learning has attracted considerable interest over the last decade and has shown significant performance improvements in image classification. In general, however, supervised learning needs a large number of labelled samples per class to achieve an acceptable result. To deal with databases that have only a few labelled samples per class, semi-supervised learning, which also exploits unlabelled samples during the training phase, is used. Indeed, unlabelled samples can help to regularize the learning model, yielding an improvement in classification accuracy. In this paper, we propose a new semi-supervised dictionary learning method based on two pillars: on the one hand, we enforce preservation of the manifold structure of the original data in the sparse code space using Locally Linear Embedding, which can be regarded as a regularization of the sparse codes; on the other hand, we train a semi-supervised classifier in the sparse code space. We show that our approach provides an improvement over state-of-the-art semi-supervised dictionary learning methods.
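The following is a minimal illustrative sketch, not the authors' implementation, of the idea the abstract describes: Locally Linear Embedding (LLE) reconstruction weights computed on the raw data are reused as a graph regularizer on the sparse codes during sparse coding. The variable names (X for data, D for dictionary, A for codes) and the hyper-parameters k, lam, gamma, lr are illustrative assumptions.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """LLE reconstruction weights W with x_i ~ sum_j W[i, j] x_j over the k nearest neighbours.

    X has shape (d, n), one sample per column.
    """
    n = X.shape[1]
    W = np.zeros((n, n))
    dists = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)  # pairwise distances
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k + 1]                 # k nearest neighbours, skip self
        Z = X[:, nbrs] - X[:, [i]]                            # neighbourhood centred at x_i
        G = Z.T @ Z + reg * np.trace(Z.T @ Z) * np.eye(k)     # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                              # weights constrained to sum to one
    return W

def graph_regularized_codes(X, D, W, lam=0.1, gamma=0.1, n_iter=200, lr=1e-2):
    """Proximal-gradient sketch of
        min_A 0.5*||X - D A||_F^2 + lam*||A||_1 + 0.5*gamma*||A (I - W^T)||_F^2,
    i.e. sparse coding where each code must be reconstructible from its
    neighbours' codes with the same LLE weights as in the input space.
    """
    n = X.shape[1]
    A = np.zeros((D.shape[1], n))
    M = np.eye(n) - W.T                                       # manifold-preservation operator
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X) + gamma * A @ (M @ M.T)      # gradient of the smooth terms
        A = A - lr * grad
        A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # soft-thresholding for the L1 term
    return A
```

In this reading, the LLE term acts purely as a regularizer: the dictionary and the semi-supervised classifier mentioned in the abstract would be learned on top of (or jointly with) the codes A, which is outside the scope of this sketch.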