Paper Title
Linear Tensor Projection Revealing Nonlinearity
Paper Authors
Paper Abstract
Dimensionality reduction is an effective method for learning from high-dimensional data and can provide a better understanding of decision boundaries in a human-readable low-dimensional subspace. Linear methods, such as principal component analysis and linear discriminant analysis, make it possible to capture correlations between many variables; however, there is no guarantee that the correlations important for prediction will be captured. Moreover, if the decision boundary has strong nonlinearity, providing such a guarantee becomes increasingly difficult. This problem is exacerbated when the data are matrices or tensors that represent relationships between variables. We propose a learning method that searches for a subspace that maximizes prediction accuracy while retaining as much of the original data information as possible, even when the prediction model in the subspace is strongly nonlinear. This makes it easier to interpret the mechanism, in terms of groups of variables, behind the prediction problem of interest. We demonstrate the effectiveness of our method by applying it to various types of data, including matrices and tensors.
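The core idea sketched in the abstract — jointly learning a linear projection and a nonlinear predictor while also penalizing information loss — can be illustrated with a minimal toy implementation. The concrete choices below (a single orthogonality-regularized projection matrix `W`, a small tanh network as the nonlinear predictor, a squared-error reconstruction penalty, and plain gradient descent) are our own assumptions for illustration, not the authors' actual algorithm or its tensor formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends nonlinearly (XOR-like) on a 2-D
# subspace embedded in a 10-D feature space.
n, d, k, m = 400, 10, 2, 16
X = rng.standard_normal((n, d))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# Parameters: projection W (d x k) and a small tanh predictor on the subspace.
W = rng.standard_normal((d, k)) * 0.1
A = rng.standard_normal((k, m)) * 0.1
b = np.zeros(m)
v = rng.standard_normal(m) * 0.1
c = 0.0
lam, mu, lr = 0.1, 1.0, 0.05  # reconstruction / orthogonality weights, step size

def forward(W, A, b, v, c):
    Z = X @ W                       # linear projection to the subspace
    H = np.tanh(Z @ A + b)          # nonlinear predictor on projected data
    p = 1.0 / (1.0 + np.exp(-(H @ v + c)))
    R = X - Z @ W.T                 # reconstruction residual (information loss)
    loss = (-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
            + lam * np.mean(R ** 2)
            + mu * np.sum((W.T @ W - np.eye(k)) ** 2))
    return Z, H, p, R, loss

_, _, _, _, loss0 = forward(W, A, b, v, c)
for _ in range(300):
    Z, H, p, R, _ = forward(W, A, b, v, c)
    # Manual backprop through the predictor.
    dlogit = (p - y) / n
    dv = H.T @ dlogit
    dc = dlogit.sum()
    dH = np.outer(dlogit, v) * (1 - H ** 2)
    dA = Z.T @ dH
    db = dH.sum(axis=0)
    dZ = dH @ A.T
    # Gradient of W: prediction path + reconstruction + orthogonality terms.
    dW = (X.T @ dZ
          - lam * 2.0 / (n * d) * (X.T @ R @ W + R.T @ X @ W)
          + mu * 4.0 * W @ (W.T @ W - np.eye(k)))
    W -= lr * dW; A -= lr * dA; b -= lr * db; v -= lr * dv; c -= lr * dc

_, _, _, _, loss1 = forward(W, A, b, v, c)
print(loss0, loss1)  # combined loss before vs. after training
```

The reconstruction and orthogonality penalties keep the learned subspace informative about the original data, while the tanh network supplies the nonlinear decision boundary; the paper's contribution is doing this for matrix- and tensor-structured inputs rather than flat vectors.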