Paper Title

Uniform Stability for First-Order Empirical Risk Minimization

Authors

Amit Attia, Tomer Koren

Abstract

We consider the problem of designing uniformly stable first-order optimization algorithms for empirical risk minimization. Uniform stability is often used to obtain generalization error bounds for optimization algorithms, and we are interested in a general approach to achieve it. For Euclidean geometry, we suggest a black-box conversion which, given a smooth optimization algorithm, produces a uniformly stable version of the algorithm while maintaining its convergence rate up to logarithmic factors. Using this reduction, we obtain a (nearly) optimal algorithm for smooth optimization with convergence rate $\widetilde{O}(1/T^2)$ and uniform stability $O(T^2/n)$, resolving an open problem of Chen et al. (2018); Attia and Koren (2021). For more general geometries, we develop a variant of Mirror Descent for smooth optimization with convergence rate $\widetilde{O}(1/T)$ and uniform stability $O(T/n)$, leaving open the question of devising a general conversion method as in the Euclidean case.
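
For context, the stability notion referenced above can be stated precisely (this is the standard definition of uniform stability, not a result specific to this paper): an algorithm $A$ mapping a sample $S$ of $n$ examples to a model $A(S)$ is $\varepsilon$-uniformly stable if, for every pair of samples $S, S'$ differing in a single example and every test point $z$, $\mathbb{E}[f(A(S); z) - f(A(S'); z)] \le \varepsilon$. Uniform stability at level $\varepsilon$ bounds the expected generalization gap (population risk minus empirical risk) by $\varepsilon$, so stability rates such as $O(T^2/n)$ and $O(T/n)$ translate directly into generalization error bounds. For example, balancing the $\widetilde{O}(1/T^2)$ optimization error against the $O(T^2/n)$ stability bound suggests choosing $T \asymp n^{1/4}$, giving an overall excess risk of roughly $\widetilde{O}(1/\sqrt{n})$.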
