Paper Title


Convergence Rate Bounds for the Mirror Descent Method: IQCs and the Bregman Divergence

Authors

Li, Mengmou, Laib, Khaled, Lestas, Ioannis

Abstract


This paper is concerned with the convergence analysis of the mirror descent (MD) method, a well-known algorithm in convex optimization. An analysis framework based on integral quadratic constraints (IQCs) is constructed to analyze the convergence rate of the MD method with strongly convex objective functions, in both continuous time and discrete time. In both settings, we formulate the problem of finding convergence rate bounds for the MD algorithm as a feasibility problem involving linear matrix inequalities (LMIs). In particular, in continuous time, we show that the Bregman divergence function, which is commonly used as a Lyapunov function for this algorithm, is a special case of the class of Lyapunov functions associated with the Popov criterion, when the latter is applied to an appropriate reformulation of the problem. Thus, applying the Popov criterion, and its combination with other IQCs, can lead to convergence rate bounds with reduced conservatism. We also illustrate via examples that the derived convergence rate bounds can be tight.
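To make the object of study concrete, the following is a minimal sketch of the mirror descent iteration in discrete time, using the negative-entropy mirror map on the probability simplex (the exponentiated-gradient update). The choice of mirror map, objective, and step size here are illustrative assumptions and are not taken from the paper, which analyzes convergence rates of MD via IQCs and LMIs rather than prescribing a particular instance.

```python
import numpy as np

def mirror_descent_simplex(grad_f, x0, step, n_iters):
    """Mirror descent on the probability simplex with the negative-entropy
    mirror map. Each iteration performs a multiplicative (exponentiated
    gradient) update followed by normalization, which is the Bregman
    projection back onto the simplex for this mirror map."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_f(x)
        x = x * np.exp(-step * g)  # mirror step in the dual, mapped back
        x /= x.sum()               # Bregman projection onto the simplex
    return x

# Hypothetical example: minimize the strongly convex objective
# f(x) = ||x - c||^2 over the simplex; since c lies in the simplex,
# the minimizer is x* = c.
c = np.array([0.2, 0.5, 0.3])
x_star = mirror_descent_simplex(lambda x: 2.0 * (x - c),
                                x0=np.ones(3) / 3, step=0.5, n_iters=200)
```

For strongly convex objectives such as this one, the iterates converge linearly toward the minimizer; bounding that rate with reduced conservatism is precisely what the paper's IQC/LMI framework addresses.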
