Paper Title
Robustness to Unbounded Smoothness of Generalized SignSGD
Paper Authors
Paper Abstract
Traditional analyses in non-convex optimization typically rely on the smoothness assumption, namely requiring the gradients to be Lipschitz. However, recent evidence shows that this smoothness condition does not capture the properties of some deep learning objective functions, including the ones involving Recurrent Neural Networks and LSTMs. Instead, they satisfy a much more relaxed condition, with potentially unbounded smoothness. Under this relaxed assumption, it has been theoretically and empirically shown that gradient-clipped SGD has an advantage over vanilla SGD. In this paper, we show that clipping is not indispensable for Adam-type algorithms in tackling such scenarios: we theoretically prove that a generalized SignSGD algorithm can obtain similar convergence rates as SGD with clipping but does not need explicit clipping at all. This family of algorithms on one end recovers SignSGD and on the other end closely resembles the popular Adam algorithm. Our analysis underlines the critical role that momentum plays in analyzing SignSGD-type and Adam-type algorithms: it not only reduces the effects of noise, thus removing the need for large mini-batches in previous analyses of SignSGD-type algorithms, but it also substantially reduces the effects of unbounded smoothness and gradient norms. We also compare these algorithms with popular optimizers on a set of deep learning tasks, observing that we can match the performance of Adam while beating the others.
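To make the algorithmic idea in the abstract concrete, below is a minimal, illustrative Python sketch of a momentum-based sign update in the spirit of SignSGD with momentum, the end of the algorithm family that recovers SignSGD. The function name, hyperparameter values, and the toy objective are assumptions for illustration, not the paper's exact generalized SignSGD update; in particular, the Adam-like end of the family would additionally normalize by a second-moment estimate, which is not shown here.

import numpy as np

# Hypothetical sketch (not the paper's exact update): SignSGD with a momentum buffer.
def signsgd_momentum_step(x, grad, m, lr=1e-3, beta=0.9):
    # Exponential moving average of stochastic gradients (the momentum the abstract highlights).
    m = beta * m + (1.0 - beta) * grad
    # Coordinate-wise sign of the momentum; note there is no explicit gradient clipping.
    x = x - lr * np.sign(m)
    return x, m

# Toy usage on f(x) = 0.5 * ||x||^2 with additive gradient noise.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
m = np.zeros_like(x)
for _ in range(200):
    grad = x + 0.1 * rng.normal(size=x.shape)  # noisy gradient of the toy objective
    x, m = signsgd_momentum_step(x, grad, m)
print("final iterate norm:", float(np.linalg.norm(x)))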