Paper Title
Unconstrained optimization using the directional proximal point method
Paper Authors
Paper Abstract
This paper presents a directional proximal point method (DPPM) to find the minimum of any C1-smooth function f. The proposed method requires that the function exhibit a locally convex segment along a descent direction at any non-critical point (such a direction is referred to as a DLC direction at that point). The DPPM determines a DLC direction by solving a two-dimensional quadratic optimization problem, regardless of the dimensionality of the function variables. Along that direction, the DPPM then updates by solving a one-dimensional optimization problem. This gives the DPPM an advantage over competing methods on large-scale problems involving a large number of variables. We show that the DPPM converges to critical points of f. We also provide conditions under which the entire DPPM sequence converges to a single critical point. For strongly convex quadratic functions, we demonstrate that the rate at which the error sequence converges to zero can be R-superlinear, regardless of the dimension of the variables.
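The abstract outlines a two-stage iteration: choose a descent (DLC) direction, then solve a one-dimensional proximal subproblem along it. The Python sketch below illustrates only that structure, under loudly stated assumptions: the paper's two-dimensional quadratic subproblem for the DLC direction is not specified in the abstract, so the negative gradient stands in as a placeholder direction, and the function name dppm, the proximal parameter lam, and the stopping rule are illustrative rather than taken from the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def dppm(f, grad, x0, lam=1.0, tol=1e-8, max_iter=1000):
    # Sketch of a directional proximal point iteration. The paper derives
    # a DLC direction from a 2-D quadratic subproblem; the negative
    # gradient below is only a placeholder for that step.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stop near a critical point
            break
        d = -g / np.linalg.norm(g)           # placeholder descent direction
        # One-dimensional proximal update along d: only the scalar step
        # length t is optimized, so the subproblem's cost is independent
        # of the dimension of x.
        phi = lambda t: f(x + t * d) + t * t / (2.0 * lam)
        x = x + minimize_scalar(phi).x * d
    return x

# Example on a strongly convex quadratic, the setting in which the
# abstract reports R-superlinear convergence of the error sequence.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
x_min = dppm(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
print(x_min, np.linalg.solve(A, b))          # the two should roughly agree

Note that the per-iteration work beyond gradient evaluation is a scalar minimization, which reflects the dimension-independence the abstract emphasizes.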