Paper Title

On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems

Authors

Garber, Dan

Abstract

We revisit the use of Stochastic Gradient Descent (SGD) for solving convex optimization problems that serve as highly popular convex relaxations for many important low-rank matrix recovery problems such as \textit{matrix completion}, \textit{phase retrieval}, and more. The computational limitation of applying SGD to these relaxations at large scale is the need to compute a potentially high-rank singular value decomposition (SVD) on each iteration in order to enforce the low-rank-promoting constraint. We begin by considering a simple and natural sufficient condition so that these relaxations indeed admit low-rank solutions. This condition is also necessary for a certain notion of low-rank-robustness to hold. Our main result shows that under this condition, which involves the eigenvalues of the gradient vector at optimal points, SGD with mini-batches, when initialized with a "warm-start" point, produces iterates that are low-rank with high probability, and hence only a low-rank SVD computation is required on each iteration. This suggests that SGD may indeed be practically applicable to solving large-scale convex relaxations of low-rank matrix recovery problems. Our theoretical results are accompanied by supporting preliminary empirical evidence. As a side benefit, our analysis is quite simple and short.
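To make the computational point concrete, below is a minimal numpy sketch of the kind of projected mini-batch SGD step the abstract describes: a gradient step followed by Euclidean projection onto a nuclear-norm ball, where only a rank-`r` truncated SVD is computed. This is an illustrative sketch under assumptions, not the paper's code: the function names (`project_l1_ball`, `sgd_step`), the radius `tau`, and the use of a full SVD as a stand-in for a truncated solver are all choices made here for clarity. Truncating to the top `r` singular values matches the exact projection only when the projected point actually has rank at most `r`, which is precisely the low-rank regime the paper analyzes.

```python
import numpy as np

def project_l1_ball(s, tau):
    # Euclidean projection of a nonnegative vector s (the singular values)
    # onto the l1 ball of radius tau, via the standard sort-and-threshold
    # routine; projecting singular values this way yields the projection
    # of the matrix onto the nuclear-norm ball.
    if s.sum() <= tau:
        return s  # already inside the ball
    u = np.sort(s)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(s - theta, 0.0)

def sgd_step(X, stoch_grad, eta, tau, r):
    # One mini-batch SGD step followed by projection onto
    # {X : ||X||_* <= tau}. Only the top-r singular triplets are kept,
    # which coincides with the exact projection whenever the projected
    # point has rank <= r (otherwise a larger SVD would be needed).
    Y = X - eta * stoch_grad
    # Full SVD here for simplicity; at scale a truncated rank-r solver
    # (e.g., scipy.sparse.linalg.svds) would replace this call.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_proj = project_l1_ball(s[:r], tau)
    return (U[:, :r] * s_proj) @ Vt[:r]
```

As a usage note: if `s_proj` has a zero entry, the projected point has rank strictly below `r`, which certifies that the truncated computation agrees with the exact projection on that iteration.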
