Paper Title
Deep Tensor CCA for Multi-view Learning
Paper Authors
Paper Abstract
We present Deep Tensor Canonical Correlation Analysis (DTCCA), a method for learning complex nonlinear transformations of multiple views (more than two) of data such that the resulting representations are linearly correlated in high order. The high-order correlation among the given views is modeled by a covariance tensor, which differs from most CCA formulations that rely solely on pairwise correlations. The transformation parameters of all views are learned jointly by maximizing the high-order canonical correlation. To solve the resulting problem, we reformulate it as finding the best sum of rank-1 approximations, which can be solved efficiently by existing tensor decomposition methods. DTCCA is a nonlinear extension of tensor CCA (TCCA) via deep networks. The transformations in DTCCA are parametric functions, which are very different from the implicit mappings defined by kernel functions. Compared with kernel TCCA, DTCCA can not only handle input data of arbitrary dimensions but also does not need to retain the training data when computing the representation of a new data point. Hence, DTCCA as a unified model can efficiently overcome the scalability issues of TCCA for either high-dimensional multi-view data or a large number of views, and it naturally extends TCCA to learn nonlinear representations. Extensive experiments on three multi-view data sets demonstrate the effectiveness of the proposed method.
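The abstract only sketches the pipeline, so the following is a minimal NumPy sketch of the TCCA building block it describes: whitening each view's features, forming the high-order covariance tensor, and extracting its leading rank-1 component (whose value is the high-order canonical correlation) by alternating power iterations. The function names (`whiten`, `covariance_tensor`, `rank1_power_iteration`) and the toy data are illustrative assumptions, not the authors' implementation; in DTCCA the views would first pass through per-view deep networks whose parameters are trained to maximize this correlation.

```python
# A sketch of the TCCA objective for three views, under the assumptions above.
import numpy as np

def whiten(X, eps=1e-6):
    """Center a view and whiten it with its (regularized) covariance."""
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / X.shape[0] + eps * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return Xc @ inv_sqrt

def covariance_tensor(views):
    """High-order covariance tensor: averaged outer product of whitened samples."""
    Z = [whiten(X) for X in views]
    n = Z[0].shape[0]
    # sum_i z1_i (outer) z2_i (outer) z3_i / n, built with einsum
    return np.einsum('ia,ib,ic->abc', Z[0], Z[1], Z[2]) / n

def rank1_power_iteration(C, n_iter=100, seed=0):
    """Best rank-1 approximation of a 3rd-order tensor via alternating power iterations."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.normal(size=d) for d in C.shape)
    u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))
    for _ in range(n_iter):
        u = np.einsum('abc,b,c->a', C, v, w); u /= np.linalg.norm(u)
        v = np.einsum('abc,a,c->b', C, u, w); v /= np.linalg.norm(v)
        w = np.einsum('abc,a,b->c', C, u, v); w /= np.linalg.norm(w)
    corr = np.einsum('abc,a,b,c->', C, u, v, w)  # high-order canonical correlation
    return corr, (u, v, w)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    shared = rng.normal(size=(500, 1))  # latent signal shared across the views
    views = [shared @ rng.normal(size=(1, d)) + 0.1 * rng.normal(size=(500, d))
             for d in (10, 15, 20)]
    C = covariance_tensor(views)
    corr, factors = rank1_power_iteration(C)
    print('leading high-order canonical correlation:', corr)
```

In this reading, the deep networks of DTCCA would replace the raw `views` with learned features, and the negated correlation (extended to a rank-k sum) would serve as the training loss, so that whitening and the tensor contraction must be expressed differentiably during training.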