Paper Title

Carathéodory Sampling for Stochastic Gradient Descent

Authors

Cosentino, Francesco; Oberhauser, Harald; Abate, Alessandro

Abstract

Many problems require optimizing empirical risk functions over large data sets. Gradient descent methods that compute the full gradient in every descent step do not scale to such datasets. Various flavours of Stochastic Gradient Descent (SGD) replace the expensive summation that computes the full gradient with a small sum over a randomly selected subsample of the data set, which in turn suffers from high variance. We present a different approach, inspired by classical results of Tchakaloff and Carathéodory on measure reduction. These results allow an empirical measure to be replaced with another, carefully constructed probability measure that has much smaller support but preserves certain statistics, such as the expected gradient. To turn this into scalable algorithms we, firstly, adaptively select the descent steps in which the measure reduction is carried out; secondly, we combine this with Block Coordinate Descent so that the measure reduction can be done very cheaply. This makes the resulting methods scalable to high-dimensional spaces. Finally, we provide experimental validation and comparison.
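
To make the measure-reduction idea concrete, the following is a minimal sketch, not the authors' implementation: given n gradient vectors in R^d carrying uniform weights, it constructs a re-weighted measure supported on at most d+1 of them that reproduces the expected gradient, which is the kind of guarantee Tchakaloff's and Carathéodory's results provide. The function name caratheodory_reduce is hypothetical, and the reduction here is obtained by solving a linear program with scipy.optimize.linprog (dual simplex), rather than by the recombination routine used in the paper; a basic feasible solution of this LP has at most d+1 non-zero weights because it has d+1 equality constraints.

import numpy as np
from scipy.optimize import linprog

def caratheodory_reduce(gradients, weights=None):
    # gradients: (n, d) array of per-sample gradients.
    # weights:   length-n probability vector (uniform if None).
    # Returns (support_indices, new_weights): a measure with at most
    # d + 1 atoms whose weighted mean gradient matches the original
    # (up to the LP solver's feasibility tolerance).
    n, d = gradients.shape
    if weights is None:
        weights = np.full(n, 1.0 / n)
    mean_grad = weights @ gradients

    # Feasible region: w >= 0, sum_i w_i * g_i = mean_grad, sum_i w_i = 1.
    A_eq = np.vstack([gradients.T, np.ones((1, n))])   # shape (d + 1, n)
    b_eq = np.append(mean_grad, 1.0)
    # Zero objective: any vertex of this polytope will do.  The dual
    # simplex returns a basic feasible solution, which has at most
    # d + 1 non-zero entries because A_eq has d + 1 rows.
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs-ds")
    w = res.x
    support = np.flatnonzero(w > 1e-12)
    return support, w[support]

# Usage: the mean of 10,000 random gradients in R^20 is reproduced by
# at most 21 re-weighted samples.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = rng.normal(size=(10_000, 20))
    idx, w = caratheodory_reduce(G)
    print(len(idx), np.max(np.abs(w @ G[idx] - G.mean(axis=0))))

In an SGD loop one would, as the abstract describes, run such a reduction only at adaptively chosen descent steps and on blocks of coordinates, since the cost of the reduction grows with the dimension d.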
