Paper Title
A Survey on Concept Factorization: From Shallow to Deep Representation Learning
Paper Authors
Paper Abstract
The quality of features learned through representation learning determines the performance of learning algorithms and of related application tasks (such as high-dimensional data clustering). As a relatively new paradigm for representation learning, Concept Factorization (CF) has attracted a great deal of interest in the machine learning and data mining communities for over a decade. Many effective CF-based methods have been proposed from different perspectives and with different properties, yet it remains difficult to grasp their essential connections and to identify the underlying explanatory factors from existing studies. In this paper, we therefore survey recent advances in CF methodologies and potential benchmarks by categorizing and summarizing the current methods. Specifically, we first revisit the root CF method, and then trace the advancement of CF-based representation learning from shallow to deep/multilayer cases. We also introduce the potential application areas of CF-based methods. Finally, we point out some future directions for studying CF-based representation learning. Overall, this survey provides an insightful overview of both the theoretical basis and the current developments in the field of CF, which can help interested researchers understand the current trends of CF and find the most appropriate CF techniques for particular applications.
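For readers unfamiliar with the root CF model mentioned in the abstract, the following is a minimal sketch of its standard objective, not a formulation taken from this paper; the symbols $X$, $W$, $V$, $m$, $n$, and $k$ are notational assumptions introduced here. Each concept (basis vector) is restricted to a nonnegative linear combination of the data points, $U = XW$, and the data matrix is reconstructed as $X \approx U V^{\top}$:

\[
\min_{W \ge 0,\; V \ge 0} \;\left\| X - X W V^{\top} \right\|_F^2,
\qquad X \in \mathbb{R}^{m \times n},\; W \in \mathbb{R}^{n \times k},\; V \in \mathbb{R}^{n \times k},
\]

where the $n$ columns of $X$ are samples and $k$ is the number of concepts. In practice this objective is typically optimized with NMF-style multiplicative update rules on $W$ and $V$; the deep/multilayer variants covered by the survey stack such factorizations hierarchically.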