Paper Title

Learning from time-dependent streaming data with online stochastic algorithms

Paper Authors

Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger

Paper Abstract

This paper addresses stochastic optimization in a streaming setting with time-dependent and biased gradient estimates. We analyze several first-order methods, including Stochastic Gradient Descent (SGD), mini-batch SGD, and time-varying mini-batch SGD, along with their Polyak-Ruppert averages. Our non-asymptotic analysis establishes novel heuristics that link dependence, biases, and convexity levels, enabling accelerated convergence. Specifically, our findings demonstrate that (i) time-varying mini-batch SGD methods have the capability to break long- and short-range dependence structures, (ii) biased SGD methods can achieve comparable performance to their unbiased counterparts, and (iii) incorporating Polyak-Ruppert averaging can accelerate the convergence of the stochastic optimization algorithms. To validate our theoretical findings, we conduct a series of experiments using both simulated and real-life time-dependent data.
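
The abstract names several concrete procedures, in particular time-varying mini-batch SGD and Polyak-Ruppert averaging. Below is a minimal sketch of what these can look like on a streaming least-squares problem; the AR(1)-dependent data stream, the batch-size schedule n_t = ceil(t^rho), and the learning-rate schedule gamma_t = C * t^(-alpha) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stream_ar1(d=5, phi=0.6, noise=0.1, seed=0):
    """Toy time-dependent stream: AR(1) features with a fixed target vector.
    Illustrative assumption -- any stationary dependent stream would do."""
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d)
    x = rng.normal(size=d)
    while True:
        x = phi * x + np.sqrt(1 - phi**2) * rng.normal(size=d)
        y = x @ theta_star + noise * rng.normal()
        yield x, y

def time_varying_minibatch_sgd(stream, d=5, T=2000, C_gamma=0.5, alpha=0.66, rho=0.5):
    """Time-varying mini-batch SGD with Polyak-Ruppert averaging (sketch).

    The batch size grows as n_t = ceil(t^rho) and the learning rate decays
    as gamma_t = C_gamma * t^(-alpha); both schedules are assumptions chosen
    for illustration, in the spirit of the setting the abstract describes.
    """
    theta = np.zeros(d)
    theta_bar = np.zeros(d)  # Polyak-Ruppert running average of iterates
    for t in range(1, T + 1):
        n_t = int(np.ceil(t ** rho))              # time-varying batch size
        batch = [next(stream) for _ in range(n_t)]
        X = np.array([x for x, _ in batch])
        y = np.array([y for _, y in batch])
        grad = X.T @ (X @ theta - y) / n_t        # mini-batch least-squares gradient
        theta -= (C_gamma * t ** (-alpha)) * grad # SGD step with decaying rate
        theta_bar += (theta - theta_bar) / t      # online Polyak-Ruppert average
    return theta, theta_bar

if __name__ == "__main__":
    stream = stream_ar1()
    theta_last, theta_avg = time_varying_minibatch_sgd(stream)
    print("last iterate:", theta_last)
    print("averaged iterate:", theta_avg)
```

The running average theta_bar is updated online in O(d) per step, which fits the streaming setting, and the growing batch size n_t is one way to realize the "time-varying mini-batch" idea that the abstract credits with breaking long- and short-range dependence structures.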
