Paper Title
Convergence rates analysis of a multiobjective proximal gradient method
Paper Authors
Paper Abstract
Many descent algorithms for multiobjective optimization have been developed in the last two decades. Tanabe et al. (Comput Optim Appl 72(2):339--361, 2019) proposed a proximal gradient method for multiobjective optimization, which can solve multiobjective problems whose objective functions are each the sum of a continuously differentiable function and a closed, proper, convex one. Under reasonable assumptions, it is known that the accumulation points of the sequences generated by this method are Pareto stationary. However, the convergence rates were not established in that paper. Here, we show global convergence rates for the multiobjective proximal gradient method, matching those known in scalar optimization. More specifically, using merit functions to measure the complexity, we present convergence rates for non-convex ($O(\sqrt{1 / k})$), convex ($O(1 / k)$), and strongly convex ($O(r^k)$ for some $r \in (0, 1)$) problems. We also extend the so-called Polyak-Łojasiewicz (PL) inequality to multiobjective optimization and establish a linear convergence rate ($O(r^k)$ for some $r \in (0, 1)$) for multiobjective problems that satisfy such an inequality.
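To illustrate the kind of iteration the abstract refers to, here is a minimal sketch of one step of the method in the smooth special case ($g_i \equiv 0$), where the proximal gradient method reduces to multiobjective steepest descent: the update direction is the negated minimum-norm convex combination of the objectives' gradients, which has a closed form for two objectives. This is not the authors' general algorithm (which solves a proximal subproblem involving the nonsmooth parts $g_i$); the function names and the toy biobjective problem below are illustrative assumptions.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Negated minimum-norm point of the segment between the two gradients.

    Solves min_{t in [0,1]} ||t*g1 + (1-t)*g2||^2 in closed form; the
    result is a common descent direction for both objectives (or zero
    at a Pareto-stationary point).
    """
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0.0 else np.clip(-(g2 @ diff) / denom, 0.0, 1.0)
    return -(t * g1 + (1.0 - t) * g2)

def mo_steepest_descent(grads, x0, step=0.1, iters=200, tol=1e-10):
    """Fixed-step multiobjective steepest descent for two smooth objectives."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = common_descent_direction(grads[0](x), grads[1](x))
        if np.linalg.norm(d) < tol:  # no common descent direction remains
            break
        x = x + step * d
    return x

# Toy biobjective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
# Its Pareto set is the segment between a and b.
a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
grads = [lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b)]
x_star = mo_steepest_descent(grads, x0=[2.0, 2.0])
```

On this toy problem the iterates approach the segment between `a` and `b`, consistent with the accumulation points being Pareto stationary; the merit-function rates stated in the abstract quantify how fast that happens under each convexity assumption.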