Paper Title

On Radial Isotropic Position: Theory and Algorithms

Paper Authors

Shiri Artstein-Avidan, Haim Kaplan, Micha Sharir

Paper Abstract

We review the theory of, and develop algorithms for, transforming a finite point set in ${\bf R}^d$ into a set in \emph{radial isotropic position} by a nonsingular linear transformation followed by rescaling each image point to the unit sphere. This problem arises in a wide spectrum of applications in computer science and mathematics. Our algorithms use gradient descent methods for a particular convex function $f$ whose minimum defines the transformation, and our main focus is on analyzing their performance. Although the minimum can be computed exactly by expensive symbolic algebra techniques, gradient descent only approximates the desired minimum, albeit to any desired level of accuracy. We show that computing the gradient of $f$ amounts to computing the Singular Value Decomposition (SVD) of a certain matrix associated with the input set, making it simple to implement. We believe it to be superior to other approximate techniques (mainly the ellipsoid algorithm) used previously to find this transformation, and that it should run much faster in practice. We prove that $f$ is smooth, which yields a convergence rate proportional to $1/\varepsilon$, where $\varepsilon$ is the desired approximation accuracy. To complete the analysis, we provide upper bounds on the norm of the optimal solution which depend on new parameters measuring the "degeneracy" of the input. We believe that our parameters capture degeneracy better than other, seemingly weaker, parameters used in previous works. We next analyze the strong convexity of $f$, and present two worst-case lower bounds on the smallest eigenvalue of its Hessian. This gives another worst-case bound on the convergence rate of a second variant of gradient descent, one that depends only logarithmically on $1/\varepsilon$.
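To make the SVD connection concrete, here is a minimal Python sketch of one plausible instantiation of the scheme the abstract describes. The abstract does not define $f$, so the sketch assumes Barthe's function $f(t) = \log\det(\sum_i e^{t_i} x_i x_i^T) - (d/n)\sum_i t_i$, a standard choice for this problem; the function name `radial_isotropic_transform`, the fixed step size, and the stopping rule are illustrative choices, not the authors' implementation. The point it illustrates: the gradient of $f$ at $t$ is the vector of leverage scores of the $n \times d$ matrix $A$ with rows $e^{t_i/2} x_i^T$, shifted by $-d/n$, and those leverage scores are exactly the squared row norms of the factor $U$ in the SVD $A = U\Sigma V^T$.

```python
# A minimal sketch, NOT the paper's reference implementation.  Assumes
# Barthe's convex function
#     f(t) = log det( sum_i e^{t_i} x_i x_i^T ) - (d/n) sum_i t_i,
# whose minimizer t* yields the transformation T = M^{-1/2} for
# M = sum_i e^{t_i*} x_i x_i^T.  Step size and stopping rule are heuristic.
import numpy as np

def radial_isotropic_transform(X, step=0.5, tol=1e-8, max_iter=10_000):
    """X: (n, d) array whose rows are unit vectors.
    Returns T (d x d) such that the normalized images of the rows of X
    under T are approximately in radial isotropic position."""
    n, d = X.shape
    t = np.zeros(n)
    for _ in range(max_iter):
        A = np.exp(t / 2.0)[:, None] * X        # rows are e^{t_i/2} x_i^T
        U, _, _ = np.linalg.svd(A, full_matrices=False)
        # grad_i f(t) = (leverage score of row i of A) - d/n;
        # leverage scores are the squared row norms of U.
        grad = (U ** 2).sum(axis=1) - d / n
        if np.linalg.norm(grad) < tol:
            break
        t -= step * grad                        # plain gradient step
    M = (np.exp(t)[:, None] * X).T @ X          # sum_i e^{t_i} x_i x_i^T
    w, V = np.linalg.eigh(M)                    # M is SPD, so eigh applies
    return V @ np.diag(w ** -0.5) @ V.T         # T = M^{-1/2}

# Usage: transform, renormalize to the sphere, and check isotropy:
# sum_i y_i y_i^T should be close to (n/d) * I.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
T = radial_isotropic_transform(X)
Y = X @ T.T
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
print(np.abs(Y.T @ Y - (20 / 3) * np.eye(3)).max())  # should be ~0
```

At the (approximate) minimizer $t^*$, the first-order condition forces $e^{t_i^*}\, x_i^T M^{-1} x_i = d/n$ for every $i$, and a short calculation then shows that the rescaled images $y_i = T x_i / \|T x_i\|$ with $T = M^{-1/2}$ satisfy $\sum_i y_i y_i^T = (n/d)\, I$, which is what the final print statement checks numerically. The sketch assumes the input admits radial isotropic position (e.g., the points are sufficiently non-degenerate); for degenerate inputs the iteration may diverge, which is precisely the regime the paper's degeneracy parameters quantify.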
