Paper Title
Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning
Paper Authors
Paper Abstract
As a concrete application of multi-view learning, multi-view classification significantly improves traditional classification methods by optimally integrating various views. Although most previous efforts have demonstrated the superiority of multi-view learning, it can be further improved by comprehensively embedding more powerful cross-view interactive information and a more reliable multi-view fusion strategy. To fulfill this goal, we propose a novel multi-view learning framework that improves multi-view classification in both of the above aspects. Specifically, we seamlessly embed various intra-view information, cross-view multi-dimensional bilinear interactive information, and a new view ensemble mechanism into a unified framework that makes decisions via optimization. In particular, we train different deep neural networks to learn various intra-view representations, and then dynamically learn multi-dimensional bilinear interactive information from different bilinear similarities via a bilinear function between views. After that, we adaptively fuse the representations of multiple views by flexibly tuning the view-weight parameters, which not only avoids the trivial weight solution but also provides a new way to select a few discriminative views that are beneficial for making decisions in multi-view classification. Extensive experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
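The two core ingredients named in the abstract — a multi-dimensional bilinear interaction between view features and a selective, weight-based fusion that avoids a trivial (one-hot) weight solution — can be illustrated with a minimal NumPy sketch. This is an assumed, simplified formulation for illustration only, not the authors' exact model: `bilinear_interaction` stacks K bilinear maps to produce a K-dimensional interaction vector, and `fuse_views` uses an exponent `gamma > 1` on simplex-constrained view weights, a common device for discouraging the degenerate solution that puts all mass on a single view.

```python
import numpy as np

def bilinear_interaction(x1, x2, W):
    """K-dimensional bilinear interaction between two view features.

    x1: (d1,) features of view 1; x2: (d2,) features of view 2;
    W: (K, d1, d2) stack of K learnable bilinear maps.
    Returns a (K,) vector whose k-th entry is x1^T W_k x2.
    """
    return np.einsum('i,kij,j->k', x1, W, x2)

def fuse_views(scores, weights, gamma=2.0):
    """Weighted fusion of per-view class scores.

    scores: (V, C) class scores from V views; weights: (V,) nonnegative
    view weights summing to one. Raising the weights to gamma > 1 and
    renormalizing is one standard way to keep the learned weights away
    from the trivial one-hot solution during optimization.
    """
    w = weights ** gamma
    w = w / w.sum()
    return w @ scores  # (C,) fused class scores
```

A quick usage example: with two 4- and 5-dimensional view features and K = 3 bilinear maps, the interaction vector has shape (3,), and with `gamma = 1.0` the fusion reduces to a plain weighted average of the per-view scores.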