Paper Title

Euclidean-Norm-Induced Schatten-p Quasi-Norm Regularization for Low-Rank Tensor Completion and Tensor Robust Principal Component Analysis

Paper Authors

Jicong Fan, Lijun Ding, Chengrun Yang, Zhao Zhang, Madeleine Udell

Paper Abstract

核标准和沙滕 - $ p $ quasi-Norm是低级矩阵恢复中受欢迎的排名代理。但是,在理论和实践中,计算张量的核标准或schatten-$ p $ quasi-norm都是很难的,阻碍了它们在低级数张量完成(LRTC)(LRTC)和张量强大的主成分分析(TRPCA)中的应用。在本文中,我们根据张量的CP组件向量的欧几里得规范提出了一类新的张量级正规化器,并表明这些正则化是张量schatten-$ p $ quasi-norm的单调转换。该连接使我们能够通过组件向量隐式地将LRTC和TRPCA中的Schatten-$ p $ quasi-norm最小化。该方法缩放到大张量,并与核定标准相比,为低量张量回收率提供任意尖锐的等级代理。另一方面,我们使用Schatten-$ p $ quasi-norm正规器和LRTC研究了LRTC的概括能力。该定理表明,相对较清晰的正常化程序会导致误差更严格,这与我们的数值结果一致。特别是,我们证明,对于$ d $ - 订单张量的LRTC,在$ d $ dorder张量上使用Schatten-$ p $ quasi-norm正常制剂,$ p = 1/d $总是比任何$ p> 1/d $在概括能力方面都要好。我们还提供了一个恢复错误,以验证小$ p $在schatten-$ p $ quasi-norm中的有用性。合成数据和实际数据的数值结果证明了正则化方法和定理的有效性。

The nuclear norm and Schatten-$p$ quasi-norm are popular rank proxies in low-rank matrix recovery. However, computing the nuclear norm or Schatten-$p$ quasi-norm of a tensor is hard in both theory and practice, hindering their application to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). In this paper, we propose a new class of tensor rank regularizers based on the Euclidean norms of the CP component vectors of a tensor and show that these regularizers are monotonic transformations of tensor Schatten-$p$ quasi-norm. This connection enables us to minimize the Schatten-$p$ quasi-norm in LRTC and TRPCA implicitly via the component vectors. The method scales to big tensors and provides an arbitrarily sharper rank proxy for low-rank tensor recovery compared to the nuclear norm. On the other hand, we study the generalization abilities of LRTC with the Schatten-$p$ quasi-norm regularizer and LRTC with the proposed regularizers. The theorems show that a relatively sharper regularizer leads to a tighter error bound, which is consistent with our numerical results. Particularly, we prove that for LRTC with Schatten-$p$ quasi-norm regularizer on $d$-order tensors, $p=1/d$ is always better than any $p>1/d$ in terms of the generalization ability. We also provide a recovery error bound to verify the usefulness of small $p$ in the Schatten-$p$ quasi-norm for TRPCA. Numerical results on synthetic data and real data demonstrate the effectiveness of the regularization methods and theorems.
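To make the idea concrete, below is a minimal, illustrative sketch of LRTC via a regularized CP factorization, written in JAX. It fits a rank-R CP model to the observed entries while penalizing each CP component through the product of the Euclidean norms of its factor vectors. The specific penalty exponent (q = 2/3 for a 3rd-order tensor), the regularization weight `lam`, and the helper names `cp_reconstruct`, `euclidean_norm_reg`, and `lrtc` are assumptions made for illustration; the paper defines the exact family of Euclidean-norm-induced regularizers, their monotone relationship to the tensor Schatten-$p$ quasi-norm, and the actual optimization algorithm.

```python
# Illustrative LRTC sketch (not the authors' exact algorithm): complete a 3rd-order
# tensor from partial observations by fitting a rank-R CP factorization whose
# components are penalized through the Euclidean norms of their factor vectors.
import jax
import jax.numpy as jnp


def cp_reconstruct(A, B, C):
    # X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return jnp.einsum("ir,jr,kr->ijk", A, B, C)


def euclidean_norm_reg(A, B, C, q=2.0 / 3.0, eps=1e-8):
    # Assumed illustrative penalty: sum_r (||a_r|| * ||b_r|| * ||c_r||)^q.
    # eps keeps the (non-smooth) power well-defined near zero-norm components.
    prod_norms = (jnp.linalg.norm(A, axis=0)
                  * jnp.linalg.norm(B, axis=0)
                  * jnp.linalg.norm(C, axis=0))
    return jnp.sum((prod_norms + eps) ** q)


def loss(params, T_obs, mask, lam):
    A, B, C = params
    X = cp_reconstruct(A, B, C)
    fit = 0.5 * jnp.sum(mask * (X - T_obs) ** 2)   # data fit on observed entries
    return fit + lam * euclidean_norm_reg(A, B, C)


def lrtc(T_obs, mask, rank=10, lam=1e-2, lr=1e-2, iters=2000, seed=0):
    # Plain gradient descent on the factor matrices, kept simple for exposition.
    key = jax.random.PRNGKey(seed)
    kA, kB, kC = jax.random.split(key, 3)
    I, J, K = T_obs.shape
    params = (0.1 * jax.random.normal(kA, (I, rank)),
              0.1 * jax.random.normal(kB, (J, rank)),
              0.1 * jax.random.normal(kC, (K, rank)))
    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(iters):
        grads = grad_fn(params, T_obs, mask, lam)
        params = tuple(p - lr * g for p, g in zip(params, grads))
    return cp_reconstruct(*params)
```

The sketch optimizes over the component vectors directly and never forms a tensor Schatten-$p$ quasi-norm, which mirrors the abstract's point that the connection to the quasi-norm lets LRTC be regularized implicitly through the CP components and therefore scale to large tensors.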
