Paper Title

Unconstrained Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems

Paper Authors

Ting-Jui Chang, Shahin Shahrampour

Paper Abstract

The regret bound of dynamic online learning algorithms is often expressed in terms of the variation in the function sequence ($V_T$) and/or the path-length of the minimizer sequence after $T$ rounds. For strongly convex and smooth functions, Zhang et al. establish the squared path-length of the minimizer sequence ($C^*_{2,T}$) as a lower bound on regret. They also show that online gradient descent (OGD) achieves this lower bound using multiple gradient queries per round. In this paper, we focus on unconstrained online optimization. We first show that a preconditioned variant of OGD achieves $O(C^*_{2,T})$ with one gradient query per round. We then propose the online optimistic Newton (OON) method for the case when the first- and second-order information of the function sequence is predictable. The regret bound of OON is captured via the quartic path-length of the minimizer sequence ($C^*_{4,T}$), which can be much smaller than $C^*_{2,T}$. We finally show that by using multiple gradients for OGD, we can achieve an upper bound of $O(\min\{C^*_{2,T}, V_T\})$ on regret.
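For readers skimming the abstract, the quantities it references are standard in the dynamic regret literature. A common set of definitions (the paper may use slightly different normalizations) is:

```latex
% Dynamic regret against the sequence of per-round minimizers:
\mathrm{Regret}^{d}_{T} = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^*),
\qquad x_t^* = \operatorname*{arg\,min}_{x} f_t(x),
% Function variation and p-th power path-length of the minimizers:
V_T = \sum_{t=2}^{T} \sup_{x}\, \bigl| f_t(x) - f_{t-1}(x) \bigr|,
\qquad
C^{*}_{p,T} = \sum_{t=2}^{T} \bigl\lVert x_t^* - x_{t-1}^* \bigr\rVert^{p}
\quad (p = 2, 4).
```

The multiple-gradient-query scheme mentioned in the abstract (in the spirit of Zhang et al.'s online multiple gradient descent) can be sketched in a few lines. The sketch below is an illustration under assumed interfaces: `grad`, `eta`, and `K` are placeholders, not the paper's exact algorithm or tuned constants.

```python
import numpy as np

def online_multiple_gradient_descent(grad, x0, eta, K, T):
    """Sketch of OGD with K gradient queries per round.

    grad: callable grad(t, x) -> gradient of f_t at x (assumed interface)
    eta:  inner step size (placeholder; in theory it is set from the
          strong-convexity and smoothness constants)
    Returns the played iterates x_1, ..., x_T.
    """
    x = np.asarray(x0, dtype=float)
    played = []
    for t in range(T):
        played.append(x.copy())       # commit to x_t, then f_t is revealed
        for _ in range(K):            # K gradient queries on the same f_t
            x = x - eta * grad(t, x)  # inner gradient step toward x_t^*
    return played

# Toy usage: quadratic losses f_t(x) = 0.5 * ||x - c_t||^2 with drifting minimizers c_t.
c = np.cumsum(0.01 * np.ones((100, 2)), axis=0)   # slowly moving targets
xs = online_multiple_gradient_descent(
    grad=lambda t, x: x - c[t], x0=np.zeros(2), eta=0.5, K=5, T=100)
```

With slowly drifting minimizers, a larger K drives each iterate closer to the current minimizer before the next round, which is the intuition behind regret bounds expressed through the path-length of the minimizer sequence.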
