Title
Efficient Alternating Least Squares Algorithms for Low Multilinear Rank Approximation of Tensors
Authors
Abstract
The low multilinear rank approximation, also known as the truncated Tucker decomposition, has been extensively utilized in many applications involving higher-order tensors. Popular methods for low multilinear rank approximation usually rely directly on the matrix SVD, and therefore often suffer from the notorious intermediate data explosion issue and are not easy to parallelize, especially when the input tensor is large. In this paper, we propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors. The proposed ALS-based approaches eliminate the redundant computation of the singular vectors of intermediate matrices and are therefore free of data explosion. The new methods are also more flexible, with an adjustable convergence tolerance, and are intrinsically parallelizable on high-performance computers. Theoretical analysis reveals that the ALS iteration in the proposed algorithms is q-linearly convergent with a relatively wide convergence region. Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that the ALS-based methods substantially reduce the total cost of the original algorithms and are highly scalable for parallel computing.
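To make the general idea concrete, the following is a minimal sketch, not the paper's algorithm, of an HOOI-style alternating scheme for a rank-(r1, ..., rN) truncated Tucker approximation. Instead of a full SVD of each mode-n unfolding, the leading left singular subspace is refined by a few subspace (orthogonal) iterations, illustrating the kind of SVD-free, tolerance-controlled update the abstract alludes to. All function names and parameters here are illustrative assumptions.

```python
# Illustrative sketch only: an alternating Tucker approximation where each
# factor update uses subspace iteration in place of a full SVD.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_products(T, mats, skip=None):
    """Multiply T by mats[n].T along every mode n (optionally skipping one)."""
    for n, U in enumerate(mats):
        if n == skip:
            continue
        T = np.moveaxis(np.tensordot(U.T, T, axes=(1, n)), 0, n)
    return T

def tucker_als(T, ranks, n_outer=10, n_inner=3, seed=0):
    """Alternating scheme for a truncated Tucker approximation of T."""
    rng = np.random.default_rng(seed)
    # Random orthonormal starting factors.
    U = [np.linalg.qr(rng.standard_normal((s, r)))[0]
         for s, r in zip(T.shape, ranks)]
    for _ in range(n_outer):
        for n in range(T.ndim):
            # Contract all modes except n with the current factors ...
            Y = unfold(mode_products(T, U, skip=n), n)
            # ... then refine U[n] by subspace iteration on Y, avoiding
            # a full SVD of Y.
            Q = U[n]
            for _ in range(n_inner):
                Q = np.linalg.qr(Y @ (Y.T @ Q))[0]
            U[n] = Q
    core = mode_products(T, U)
    return core, U

# Usage: recover a tensor with exact low multilinear rank.
shape, ranks = (20, 18, 16), (4, 3, 2)
rng = np.random.default_rng(1)
core0 = rng.standard_normal(ranks)
A = [rng.standard_normal((s, r)) for s, r in zip(shape, ranks)]
T = mode_products(core0, [a.T for a in A])   # T = core0 x1 A1 x2 A2 x3 A3
core, U = tucker_als(T, ranks)
T_hat = mode_products(core, [u.T for u in U])
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

For an exactly low-rank input as above, the recovered approximation matches the tensor to near machine precision; for noisy data, the inner iteration count (or a convergence tolerance on the subspace change) trades accuracy for cost, which is the flexibility the abstract highlights.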