Paper Title
Faster Optimization on Sparse Graphs via Neural Reparametrization
Paper Authors
Paper Abstract
In mathematical optimization, second-order Newton methods generally converge faster than first-order methods, but they require the inverse of the Hessian and are therefore computationally expensive. However, we discover that on sparse graphs, graph neural networks (GNNs) can implement an efficient quasi-Newton method that can speed up optimization by 10-100x. Our method, neural reparametrization, recasts the optimization parameters as the output of a GNN to reshape the optimization landscape. Using a precomputed Hessian as the propagation rule, the GNN can effectively utilize second-order information, reaching a similar effect as adaptive gradient methods. As our method solves optimization through architecture design, it can be used in conjunction with any optimizer, such as Adam or RMSProp. We show the application of our method on scientifically relevant problems including heat diffusion, synchronization, and persistent homology.
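Since the abstract describes the reparametrization only in prose, the following is a minimal, hypothetical PyTorch sketch of the idea: the optimization variable x is produced by a small GNN whose propagation matrix is a precomputed sparse Hessian, and a standard first-order optimizer (here Adam) updates the GNN weights instead of x directly. All concrete choices here (the `ReparamGNN` architecture, the path-graph Laplacian used as the example objective and propagation rule, layer sizes, learning rate) are illustrative assumptions, not the paper's exact design.

```python
import torch

# Hypothetical sketch of neural reparametrization on a sparse graph.
# The optimization variable x is the output of a GNN; the GNN propagates
# features with a precomputed sparse Hessian H of the objective.

class ReparamGNN(torch.nn.Module):
    """Two-layer GNN whose propagation matrix H is a precomputed sparse Hessian."""
    def __init__(self, n, d, H):
        super().__init__()
        self.H = H                               # sparse (n, n) propagation matrix
        self.Z = torch.randn(n, d)               # fixed random input node features
        self.lin1 = torch.nn.Linear(d, d)
        self.lin2 = torch.nn.Linear(d, 1)

    def forward(self):
        h = torch.tanh(self.lin1(torch.sparse.mm(self.H, self.Z)))
        x = self.lin2(torch.sparse.mm(self.H, h))
        return x.squeeze(-1)                     # reparametrized variable x = g_theta(Z)

def dirichlet_energy(x, L):
    # Example objective f(x) = 0.5 * x^T L x (a heat-diffusion-style energy);
    # its Hessian is L itself, which we reuse as the GNN propagation rule.
    return 0.5 * x @ torch.sparse.mm(L, x.unsqueeze(-1)).squeeze(-1)

n, d = 1000, 16
# Sparse Laplacian of a path graph, standing in for the problem's sparse graph.
i = torch.arange(n - 1)
rows = torch.cat([i, i + 1, torch.arange(n)])
cols = torch.cat([i + 1, i, torch.arange(n)])
deg = torch.full((n,), 2.0); deg[0] = deg[-1] = 1.0
vals = torch.cat([-torch.ones(2 * (n - 1)), deg])
L = torch.sparse_coo_tensor(torch.stack([rows, cols]), vals, (n, n)).coalesce()

model = ReparamGNN(n, d, H=L)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)  # any first-order optimizer works
for step in range(500):
    opt.zero_grad()
    loss = dirichlet_energy(model(), L)       # optimize GNN weights, not x directly
    loss.backward()
    opt.step()
```

In this toy quadratic case the Hessian of the objective coincides with the graph Laplacian, so the same sparse matrix serves as both the loss and the propagation rule; for other objectives the abstract's "precomputed Hessian" (or a sparse approximation of it) would be computed separately and plugged in as H.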