Paper Title


Sub-linear convergence of a stochastic proximal iteration method in Hilbert space

Authors

Monika Eisenmann, Tony Stillfjord, Måns Williamson

Abstract


We consider a stochastic version of the proximal point algorithm for optimization problems posed on a Hilbert space. A typical application of this is supervised learning. While the method is not new, it has not been extensively analyzed in this form. Indeed, most related results are confined to the finite-dimensional setting, where error bounds could depend on the dimension of the space. On the other hand, the few existing results in the infinite-dimensional setting only prove very weak types of convergence, owing to weak assumptions on the problem. In particular, there are no results that show convergence with a rate. In this article, we bridge these two worlds by assuming more regularity of the optimization problem, which allows us to prove convergence with an (optimal) sub-linear rate also in an infinite-dimensional setting. In particular, we assume that the objective function is the expected value of a family of convex differentiable functions. While we require that the full objective function is strongly convex, we do not assume that its constituent parts are so. Further, we require that the gradient satisfies a weak local Lipschitz continuity property, where the Lipschitz constant may grow polynomially given certain guarantees on the variance and higher moments near the minimum. We illustrate these results by discretizing a concrete infinite-dimensional classification problem with varying degrees of accuracy.
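To make the setting concrete, here is a minimal sketch of a stochastic proximal point iteration on a synthetic least-squares problem. This is an illustration only, not the authors' experiment: the problem instance, step-size schedule `eta = 10/k`, and iteration count are all assumptions. Each component `0.5 * (a_i^T x - b_i)^2` is merely convex, while the averaged objective is strongly convex when the data matrix has full column rank, mirroring the assumption in the abstract; the quadratic form of each component makes the proximal (implicit) step available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2.
# Each summand is only convex; the full objective is strongly convex
# because A has full column rank (n >> d).
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

x = np.zeros(d)
for k in range(1, 20001):
    i = rng.integers(n)                 # sample one component function
    a, bi = A[i], b[i]
    eta = 10.0 / k                      # sub-linearly decaying step size

    # Stochastic proximal step, in closed form for a quadratic component:
    #   x_{k+1} = argmin_z  0.5*(a^T z - bi)^2 + ||z - x||^2 / (2*eta)
    x = x - eta * (a @ x - bi) / (1.0 + eta * (a @ a)) * a

print(np.linalg.norm(x - x_true))       # distance to the minimizer
```

Unlike explicit SGD, the implicit update above divides the gradient step by `1 + eta * ||a||^2`, which keeps the iteration stable even for large initial step sizes; this robustness is one of the main practical motivations for proximal variants.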
