Paper Title

Reduced-Rank Regression with Operator Norm Error

Paper Authors

Kacham, Praneeth, Woodruff, David P.

Paper Abstract

A common data analysis task is the reduced-rank regression problem: $$\min_{\textrm{rank-}k \ X} \|AX-B\|,$$ where $A \in \mathbb{R}^{n \times c}$ and $B \in \mathbb{R}^{n \times d}$ are given large matrices and $\|\cdot\|$ is some norm. Here the unknown matrix $X \in \mathbb{R}^{c \times d}$ is constrained to be of rank $k$, since this yields a significant reduction in the number of parameters of the solution when $c$ and $d$ are large. In the case of Frobenius norm error, this problem has a standard closed-form solution, and there is a fast algorithm for finding a $(1+\varepsilon)$-approximate solution. However, for the important case of operator norm error, no closed-form solution is known, and the fastest known algorithms take singular value decomposition time. We give the first randomized algorithms for this problem running in time $$(\mathrm{nnz}(A) + \mathrm{nnz}(B) + c^2) \cdot k/\varepsilon^{1.5} + (n+d)k^2/\varepsilon + c^{\omega},$$ up to a polylogarithmic factor involving condition numbers, matrix dimensions, and dependence on $1/\varepsilon$. Here $\mathrm{nnz}(M)$ denotes the number of non-zero entries of a matrix $M$, and $\omega$ is the exponent of matrix multiplication. As both (1) spectral low-rank approximation ($A = B$) and (2) linear system solving ($n = c$ and $d = 1$) are special cases, our running time cannot be improved by more than a $1/\varepsilon$ factor (up to polylogarithmic factors) without a major breakthrough in linear algebra. Interestingly, known techniques for low-rank approximation, such as alternating minimization or sketch-and-solve, provably fail for this problem. Instead, our algorithm uses an existential characterization of a solution, together with Krylov methods, low-degree polynomial approximation, and sketching-based preconditioning.
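
For context, the Frobenius-norm case mentioned above does admit a standard closed-form solution: project $B$ onto the column space of $A$, take the best rank-$k$ approximation of the projection (Eckart-Young), and pull it back through the pseudoinverse, giving $X^* = A^+ [A A^+ B]_k$. The sketch below illustrates this textbook construction in NumPy. It is not the paper's operator-norm algorithm; the function name and the full-column-rank assumption on $A$ are ours.

```python
import numpy as np

def reduced_rank_regression_frobenius(A, B, k):
    """Closed-form minimizer of ||A X - B||_F over rank-k X.

    Standard construction (not the paper's operator-norm algorithm):
    X* = A^+ [A A^+ B]_k, where [.]_k is the best rank-k approximation.
    Assumes A has full column rank, so QR yields a basis for col(A).
    """
    # Orthonormal basis for the column space of A.
    Q, _ = np.linalg.qr(A)
    # Project B onto col(A); the residual (I - QQ^T)B is unavoidable.
    PB = Q @ (Q.T @ B)
    # Best rank-k approximation of the projection (Eckart-Young).
    U, s, Vt = np.linalg.svd(PB, full_matrices=False)
    PBk = (U[:, :k] * s[:k]) @ Vt[:k, :]
    # Pull back through the pseudoinverse: a rank-<=k coefficient matrix.
    return np.linalg.pinv(A) @ PBk

# Tiny usage example on random data (dimensions are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
B = rng.standard_normal((100, 15))
X = reduced_rank_regression_frobenius(A, B, k=3)
print(np.linalg.matrix_rank(X), np.linalg.norm(A @ X - B))
```

No such closed form is known under the operator norm, which is precisely the gap the paper's randomized algorithm addresses.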
