Paper Title

Robust Self-Supervised Convolutional Neural Network for Subspace Clustering and Classification

Paper Authors

Dario Sitnik, Ivica Kopriva

Paper Abstract

Insufficient capability of existing subspace clustering methods to handle data coming from nonlinear manifolds, data corruptions, and out-of-sample data hinders their applicability to real-world clustering and classification problems. This paper proposes a robust formulation of the self-supervised convolutional subspace clustering network ($S^2$ConvSCN) that incorporates a fully connected (FC) layer and is thus capable of handling out-of-sample data by classifying it with a softmax classifier. $S^2$ConvSCN clusters data coming from nonlinear manifolds by learning a linear self-representation model in the feature space. Robustness to data corruptions is achieved by using the correntropy induced metric (CIM) of the error. Furthermore, the block-diagonal (BD) structure of the representation matrix is enforced explicitly through BD regularization. In a truly unsupervised training environment, Robust $S^2$ConvSCN outperforms its baseline version by a significant margin on both seen and unseen data across four well-known datasets. Arguably, such an ablation study has not been reported before.
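
For illustration, the sketch below shows, in PyTorch, the two ingredients the abstract highlights: a linear self-expression layer acting on encoder features and a correntropy-induced-metric (CIM) fit term on the self-representation error. This is a minimal sketch under stated assumptions, not the authors' released implementation; the names `SelfExpression` and `cim_loss`, the kernel width `sigma`, the penalty weight, and the toy feature tensor are all illustrative choices.

```python
import torch
import torch.nn as nn

class SelfExpression(nn.Module):
    """Linear self-representation layer: Z ≈ C Z, with the diagonal of C masked."""
    def __init__(self, n_samples):
        super().__init__()
        # Coefficient matrix C, initialized near zero (illustrative scale).
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, z):
        # z: (n_samples, feature_dim); zero the diagonal so a sample
        # cannot represent itself.
        c = self.C - torch.diag(torch.diag(self.C))
        return c @ z, c

def cim_loss(residual, sigma=1.0):
    """Correntropy-induced metric (squared) of the self-representation error.

    CIM^2(e) = mean(kappa(0) - kappa_sigma(e)) with a Gaussian kernel
    kappa_sigma(e) = exp(-e^2 / (2 sigma^2)). Because the kernel is bounded,
    large residuals from corrupted samples are down-weighted compared with
    a squared-error loss.
    """
    kernel = torch.exp(-residual.pow(2) / (2 * sigma ** 2))
    return (1.0 - kernel).mean()

# Minimal usage: z stands in for features from the convolutional encoder.
z = torch.randn(100, 32)                 # hypothetical batch of 100 embedded samples
self_expr = SelfExpression(n_samples=100)
z_hat, c = self_expr(z)
# CIM fit term plus a simple penalty on C; the paper instead enforces a
# block-diagonal (BD) structure on C via a BD regularizer.
loss = cim_loss(z_hat - z) + 1e-3 * c.abs().sum()
loss.backward()
```

The bounded CIM term is what distinguishes this robust formulation from a plain squared-error self-expression loss: gross corruptions saturate the kernel instead of dominating the gradient.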
