Paper Title
Optimistic Mirror Descent Either Converges to Nash or to Strong Coarse Correlated Equilibria in Bimatrix Games
Paper Authors
Paper Abstract
We show that, for any sufficiently small fixed $ε > 0$, when both players in a general-sum two-player (bimatrix) game employ optimistic mirror descent (OMD) with smooth regularization, learning rate $η = O(ε^2)$, and $T = Ω(\text{poly}(1/ε))$ repetitions, either the dynamics reach an $ε$-approximate Nash equilibrium (NE), or the average correlated distribution of play is an $Ω(\text{poly}(ε))$-strong coarse correlated equilibrium (CCE): any possible unilateral deviation not only leaves the deviating player no better off, but in fact decreases their utility by $Ω(\text{poly}(ε))$. As an immediate consequence, when the iterates of OMD are bounded away from being Nash equilibria in a bimatrix game, we guarantee convergence to an exact CCE after only $O(1)$ iterations. Our results reveal that uncoupled no-regret learning algorithms can converge to CCE in general-sum games remarkably faster than to NE in, for example, zero-sum games. To establish this, we show that when OMD does not reach arbitrarily close to a NE, the (cumulative) regret of both players is not only negative, but decays linearly with time. Given that regret is the canonical measure of performance in online learning, our results suggest that the cycling behavior of no-regret learning algorithms in games can be justified in terms of efficiency.
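For concreteness, the following is a minimal sketch (not the authors' implementation) of the dynamics described in the abstract: both players run OMD with the smooth squared-Euclidean regularizer, i.e., projected optimistic gradient ascent, on a bimatrix game, and the CCE gap of the average correlated distribution of play is then measured as the largest gain any fixed unilateral deviation would yield. The function names, step size, horizon, and example payoff matrices are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def omd_bimatrix(A, B, eta=0.01, T=10_000):
    """Run OMD with the squared-Euclidean regularizer (projected optimistic
    gradient ascent) for both players of a bimatrix game with payoff
    matrices A (row player) and B (column player). Returns the average
    correlated distribution of play and its CCE gap."""
    n, m = A.shape
    x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    gx_prev, gy_prev = np.zeros(n), np.zeros(m)
    avg_joint = np.zeros((n, m))
    for _ in range(T):
        gx, gy = A @ y, B.T @ x                        # utility gradients
        # optimistic update: extrapolate with the previous gradient
        x_next = project_simplex(x + eta * (2.0 * gx - gx_prev))
        y_next = project_simplex(y + eta * (2.0 * gy - gy_prev))
        gx_prev, gy_prev, x, y = gx, gy, x_next, y_next
        avg_joint += np.outer(x, y)
    avg_joint /= T
    # CCE gap: best gain from a fixed unilateral deviation against avg_joint
    row_gain = np.max(A @ avg_joint.sum(axis=0)) - np.sum(A * avg_joint)
    col_gain = np.max(B.T @ avg_joint.sum(axis=1)) - np.sum(B * avg_joint)
    return avg_joint, max(row_gain, col_gain)

# Example: a small general-sum game (illustrative payoffs only).
A = np.array([[3.0, 0.0], [5.0, 1.0]])
B = np.array([[3.0, 5.0], [0.0, 1.0]])
mu, gap = omd_bimatrix(A, B, eta=0.01, T=20_000)
print("average correlated distribution:\n", mu)
print("CCE gap (<= 0 means exact CCE):", gap)
```

A strictly negative gap in this check corresponds to the "strong" CCE notion above: every fixed unilateral deviation strictly decreases the deviator's expected utility under the average correlated play.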