Paper Title

Convergence rates analysis of Interior Bregman Gradient Method for Vector Optimization Problems

Authors

Jian Chen, Liping Tang, Xinmin Yang

Abstract

In recent years, by using the Bregman distance, Lipschitz gradient continuity and strong convexity have been lifted and replaced by relative smoothness and relative strong convexity. Under mild assumptions, gradient methods with Bregman regularity were proved to converge linearly for single-objective optimization problems (SOPs). In this paper, we extend relative smoothness and relative strong convexity to vector-valued functions and analyze the convergence of an interior Bregman gradient method for vector optimization problems (VOPs). Specifically, the global convergence rates are $\mathcal{O}(\frac{1}{k})$ and $\mathcal{O}(r^{k})$ $(0<r<1)$ for convex and relatively strongly convex VOPs, respectively. Moreover, the proposed method converges linearly for VOPs that satisfy a vector Bregman-PL inequality.
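To make the Bregman machinery concrete, below is a minimal single-objective sketch of the SOP setting the abstract refers to, not the authors' vector-valued method. One Bregman gradient step replaces the Euclidean proximal term with the Bregman distance $D_h(y,x)=h(y)-h(x)-\langle\nabla h(x),y-x\rangle$ of a kernel $h$. The negative-entropy kernel, the toy objective, and the function name are illustrative assumptions chosen because that kernel gives a closed-form update.

```python
import numpy as np

def bregman_gradient_step(x, grad, step):
    """One Bregman (mirror descent) step with the negative-entropy
    kernel h(x) = sum_i x_i log x_i. Its Bregman distance D_h blows up
    at the boundary of the positive orthant, so every iterate stays
    strictly interior. The step solves
        argmin_y <grad, y> + (1/step) * D_h(y, x),
    which for this kernel reduces to a multiplicative update."""
    return x * np.exp(-step * grad)

# Toy run (hypothetical objective): minimize f(x) = 0.5 * ||x - c||^2
# over the open positive orthant, starting strictly interior.
c = np.array([0.2, 1.5, 0.7])
x = np.ones(3)
for _ in range(200):
    x = bregman_gradient_step(x, x - c, step=0.5)
print(x)  # close to c; no iterate ever leaves the positive orthant
```

Because $D_h$ diverges on the boundary of the domain of $h$, the iterates remain strictly interior without any projection, which is what makes such a scheme an interior method.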
