Paper Title

Multi-view Orthonormalized Partial Least Squares: Regularizations and Deep Extensions

Authors

Li Wang, Ren-Cang Li, Wen-Wei Lin

Abstract

We establish a family of subspace-based learning methods for multi-view learning, using least squares as the fundamental basis. Specifically, we investigate orthonormalized partial least squares (OPLS) and study its important properties for both multivariate regression and classification. Building on the least squares reformulation of OPLS, we propose a unified multi-view learning framework to learn a classifier over a common latent space shared by all views. The regularization technique is further leveraged to unleash the power of the proposed framework by providing three generic types of regularizers on its inherent ingredients: model parameters, decision values, and latent projected points. We instantiate a set of regularizers in terms of various priors. With proper choices of regularizers, the proposed framework not only recasts existing methods but also inspires new models. To further improve the performance of the proposed framework on complex real-world problems, we propose to learn nonlinear transformations parameterized by deep networks. Extensive experiments are conducted to compare various methods on nine data sets with different numbers of views, in terms of both feature extraction and cross-modal retrieval.
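To make the OPLS starting point concrete, below is a minimal single-view sketch assuming the standard formulation max_W tr(W^T X Y^T Y X^T W) subject to W^T X X^T W = I, solved as a generalized eigenvalue problem. The ridge term gamma stands in for the "model parameter" type of regularizer mentioned in the abstract; the function name opls and the variable names X, Y, W, k, gamma are illustrative assumptions, not the paper's notation or its multi-view framework.

```python
# A minimal single-view OPLS sketch (illustrative; not the paper's code).
# Assumes the standard formulation
#   max_W  tr(W^T X Y^T Y X^T W)   s.t.  W^T X X^T W = I,
# solved as a generalized symmetric eigenvalue problem. The ridge term
# `gamma` illustrates the "model parameter" regularizer family from the
# abstract; all names here are assumptions for the sketch.
import numpy as np
from scipy.linalg import eigh

def opls(X, Y, k, gamma=1e-3):
    """X: (d, n) features; Y: (m, n) targets; k: latent dimensionality."""
    A = X @ Y.T @ Y @ X.T                      # between-set scatter
    B = X @ X.T + gamma * np.eye(X.shape[0])   # regularized data covariance
    vals, vecs = eigh(A, B)                    # solves A w = lambda B w
    W = vecs[:, np.argsort(vals)[::-1][:k]]    # k leading eigenvectors
    return W

# Usage: project data into the learned latent space.
# W = opls(X_train, Y_train, k=10)
# Z_test = W.T @ X_test                        # (k, n_test) latent scores
```

In the multi-view framework described by the abstract, one such projection would be learned per view so that all views map into a common latent space on which a classifier is trained; the deep extension replaces the linear map with a network-parameterized transformation.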
