Paper Title
Single Loop Gaussian Homotopy Method for Non-convex Optimization
Paper Authors
Paper Abstract
The Gaussian homotopy (GH) method is a popular approach to finding better stationary points of non-convex optimization problems by gradually reducing a parameter value $t$, which transforms the problem to be solved from an almost convex one into the original target. Existing GH-based methods repeatedly call an iterative optimization solver to find a stationary point every time $t$ is updated, which incurs high computational costs. We propose a novel single-loop framework for GH methods (SLGH) that updates the parameter $t$ and the optimization decision variables at the same time. Computational complexity analysis is performed on the SLGH algorithm under various situations: either a gradient or a gradient-free oracle of the GH function is available, in both deterministic and stochastic settings. The convergence rate of SLGH with a tuned hyperparameter matches that of gradient descent, even though the problem to be solved gradually changes with $t$. In numerical experiments, our SLGH algorithms show faster convergence than an existing double-loop GH method, while outperforming gradient-descent-based methods in terms of finding better solutions.
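To make the single-loop idea concrete, the sketch below performs one gradient-type step on the smoothed objective and one decrease of $t$ per iteration, using a standard two-point Gaussian smoothing (zeroth-order) estimator of the GH function $F(x, t) = \mathbb{E}_{u \sim N(0, I)}[f(x + t u)]$. This is a minimal illustration, not the paper's exact algorithm: the function names, the geometric decrease schedule for $t$, and all hyperparameter values are assumptions made for the example.

```python
import numpy as np

def slgh_zeroth_order(f, x0, t0=1.0, gamma=0.999, eta=0.01,
                      n_samples=10, n_iters=1000, rng=None):
    """Hedged sketch of a single-loop Gaussian homotopy (SLGH) iteration.

    Per iteration: (i) estimate the gradient of the smoothed objective
    F(x, t) = E_{u~N(0,I)}[f(x + t*u)] with a two-point Gaussian
    smoothing estimator, (ii) take a descent step on x, and (iii) shrink
    the smoothing parameter t. Both x and t are updated in the same
    loop, in contrast to double-loop GH methods that solve each smoothed
    problem to stationarity before reducing t. Hyperparameters and the
    schedule for t are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = np.array(x0, dtype=float), t0
    for _ in range(n_iters):
        # Two-point Gaussian smoothing estimate of grad_x F(x, t).
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            u = rng.standard_normal(x.shape)
            grad += (f(x + t * u) - f(x - t * u)) / (2.0 * t) * u
        grad /= n_samples
        x = x - eta * grad        # descent step on the smoothed problem
        t = max(gamma * t, 1e-8)  # move toward the original problem
    return x

# Example: a non-convex 1-D objective with many local minima.
f = lambda z: float(np.sum(z**2 + 3.0 * np.sin(5.0 * z)))
x_star = slgh_zeroth_order(f, x0=np.array([2.0]), n_iters=2000)
```

Decreasing $t$ geometrically is what couples the two updates into a single loop: early iterations descend on a heavily smoothed, nearly convex surrogate, and later iterations approach the original objective, removing the inner solver that double-loop GH methods invoke at every value of $t$.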