Paper title
Unconstrained optimisation on Riemannian manifolds
Paper authors
Paper abstract
In this paper, we give explicit descriptions of versions of (Local-) Backtracking Gradient Descent and New Q-Newton's method in the Riemannian setting. Here are some easy-to-state consequences of results in this paper, where $X$ is a general Riemannian manifold of finite dimension and $f:X\rightarrow \mathbb{R}$ is a $C^2$ function which is Morse (that is, all its critical points are non-degenerate).

{\bf Theorem.} For random choices of the hyperparameters in the Riemannian Local Backtracking Gradient Descent algorithm and for random choices of the initial point $x_0$, the sequence $\{x_n\}$ constructed by the algorithm either (i) converges to a local minimum of $f$ or (ii) eventually leaves every compact subset of $X$ (in other words, diverges to infinity on $X$). If $f$ has compact sublevel sets, then only the former alternative happens. The convergence rate is the same as in the classical paper by Armijo.

{\bf Theorem.} Assume that $f$ is $C^3$. For random choices of the hyperparameters in the Riemannian New Q-Newton's method, if the sequence constructed by the algorithm converges, then the limit is a critical point of $f$. We have a local Stable-Center manifold theorem, near saddle points of $f$, for the dynamical system associated to the algorithm. If the limit point is a non-degenerate minimum point, then the rate of convergence is quadratic. If, moreover, $X$ is an open subset of a Lie group and the initial point $x_0$ is chosen randomly, then we can globally avoid saddle points.

As an application, we propose a general method using Riemannian Backtracking GD to find the minimum of a function on a bounded ball in a Euclidean space, and carry out explicit calculations for computing the smallest eigenvalue of a symmetric square matrix.
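To make the eigenvalue application concrete, the following is a minimal Python sketch, not the paper's construction on a bounded ball: it minimises the Rayleigh quotient $x^T A x$ on the unit sphere with Riemannian gradient descent and an Armijo-type backtracking line search, whose minimum value is the smallest eigenvalue of a symmetric matrix $A$. The function name, the random seed, and the hyperparameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def smallest_eigenvalue_backtracking_gd(A, x0=None, alpha=1e-4, max_iter=500, tol=1e-10):
    """Minimise f(x) = x^T A x on the unit sphere S^{n-1} by Riemannian
    gradient descent with Armijo backtracking; the minimum value is the
    smallest eigenvalue of the symmetric matrix A."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    x = x0 if x0 is not None else rng.standard_normal(n)
    x /= np.linalg.norm(x)

    f = lambda y: y @ A @ y  # Rayleigh quotient restricted to the sphere

    for _ in range(max_iter):
        egrad = 2.0 * (A @ x)               # Euclidean gradient of x^T A x
        rgrad = egrad - (egrad @ x) * x     # projection onto the tangent space at x
        if np.linalg.norm(rgrad) < tol:
            break
        t = 1.0                             # Armijo backtracking: halve t until sufficient decrease
        while True:
            y = x - t * rgrad
            y /= np.linalg.norm(y)          # retraction back onto the sphere
            if f(y) <= f(x) - alpha * t * (rgrad @ rgrad) or t < 1e-16:
                break
            t *= 0.5
        x = y
    return f(x), x

# Usage: compare against numpy's eigenvalue solver.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = smallest_eigenvalue_backtracking_gd(A)
print(lam, np.linalg.eigvalsh(A)[0])  # the two values should agree closely
```

With a randomly chosen initial point, the iterates avoid the saddle points of the Rayleigh quotient (the other eigenvectors) almost surely, matching the flavour of the theorem above.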
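For orientation, here is also a minimal Euclidean sketch in the spirit of New Q-Newton's method, not the Riemannian version developed in the paper: the Hessian is perturbed by a multiple of a power of the gradient norm until it becomes invertible, and the components of the resulting Newton-type direction lying in the negative eigenspace are reflected, which is what makes saddle points repelling. The helper name new_q_newton_step, the delta values, and the exponent alpha are assumptions made for illustration.

```python
import numpy as np

def new_q_newton_step(grad_f, hess_f, x, deltas=(0.0, 1.0, 2.0), alpha=1.0):
    """One Euclidean step in the spirit of New Q-Newton's method (illustrative
    hyperparameters): perturb the Hessian until invertible, solve for the
    Newton-like direction, and flip its negative-eigenspace components."""
    g = grad_f(x)
    H = hess_f(x)
    kappa = np.linalg.norm(g) ** (1.0 + alpha)
    # pick the first delta making the perturbed Hessian invertible
    for d in deltas:
        Ak = H + d * kappa * np.eye(len(x))
        if abs(np.linalg.det(Ak)) > 1e-14:
            break
    w, V = np.linalg.eigh(Ak)            # Ak is symmetric: eigendecomposition
    c = V.T @ np.linalg.solve(Ak, g)     # coordinates of Ak^{-1} grad in the eigenbasis
    c = np.where(w > 0, c, -c)           # reflect the negative-eigenvalue components
    return x - V @ c

# Usage on f(x, y) = x^2 - y^2 + x^4 + y^4, which has a saddle at the origin:
grad = lambda p: np.array([2*p[0] + 4*p[0]**3, -2*p[1] + 4*p[1]**3])
hess = lambda p: np.diag([2 + 12*p[0]**2, -2 + 12*p[1]**2])
x = np.array([0.3, 0.2])
for _ in range(20):
    x = new_q_newton_step(grad, hess, x)
print(x, grad(x))  # the iterates settle on a critical point with near-zero gradient
```

In this toy run the iterates move away from the saddle at the origin and converge to a non-degenerate minimum, where, as stated in the theorem above, the convergence is quadratic.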