Paper Title

Cost-efficient Gaussian Tensor Network Embeddings for Tensor-structured Inputs

Paper Authors

Linjian Ma, Edgar Solomonik

Paper Abstract

This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and to accelerate applications such as tensor decomposition and kernel regression. Existing works have designed embeddings for inputs $x$ with specific structures, such that $Sx$ can be computed efficiently. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (the row size of $S$) and the sketching computational cost are low.

We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition to quantify the accuracy of such embeddings, and derive asymptotic cost lower bounds for sketching using embeddings that satisfy this condition and have a sketch size lower than any input dimension. We then provide an algorithm to efficiently sketch input data using such embeddings. The sketch size of the embedding used in the algorithm has a linear dependence on the number of sketching dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, this algorithm achieves an asymptotic cost within a factor of $O(\sqrt{m})$ of our cost lower bound, where $m$ is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, this algorithm yields the optimal asymptotic sketching cost. We apply our sketching analysis to inexact tensor decomposition optimization algorithms: we provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and show the optimality of an existing algorithm for tensor train rounding.
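To make the construction concrete, below is a minimal NumPy illustration (a sketch under simplifying assumptions, not the paper's algorithm): a Gaussian tensor network embedding that reduces to a sequence of small sketching matrices, applied to a rank-1, Kronecker-structured input $x = x_1 \otimes x_2 \otimes x_3$. The mode size $n$, per-level sketch size $m$, and the pairwise merge order are illustrative choices. Sketching mode by mode costs $O(mn + m^3)$, versus $O(mn^3)$ for a dense Gaussian sketch of the flattened input.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 30, 64  # mode dimension and per-level sketch size (illustrative)

# Rank-1 (Kronecker-structured) input x = x1 ⊗ x2 ⊗ x3 with n**3 entries,
# which we never need to form in order to compute the sketch.
x1, x2, x3 = (rng.standard_normal(n) for _ in range(3))

# Leaf sketches: one small Gaussian matrix per input dimension, scaled so
# that each sketch preserves norms in expectation.
S1, S2, S3 = (rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(3))
y1, y2, y3 = S1 @ x1, S2 @ x2, S3 @ x3  # cost O(m n) each

# Internal Gaussian tensors merge sketched modes pairwise, so the whole
# embedding reduces to a sequence of small sketching matrices.
T12 = rng.standard_normal((m, m * m)) / np.sqrt(m)
y12 = T12 @ np.kron(y1, y2)             # cost O(m^3), independent of n

T = rng.standard_normal((m, m * m)) / np.sqrt(m)
y = T @ np.kron(y12, y3)                # the final sketch Sx, of length m

# Compare against the norm of the full input: each level preserves norms
# in expectation, so the ratio should be close to 1, up to sketching noise.
x_full = np.kron(np.kron(x1, x2), x3)
print(np.linalg.norm(y) / np.linalg.norm(x_full))
```

Since each level preserves norms in expectation, the printed ratio concentrates near 1 as $m$ grows; this is the flavor of the accuracy condition that the paper quantifies for general tensor network embeddings.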
