Paper Title

Data-Driven Mirror Descent with Input-Convex Neural Networks

Authors

Hong Ye Tan, Subhadip Mukherjee, Junqi Tang, Carola-Bibiane Schönlieb

Abstract

Learning-to-optimize is an emerging framework that seeks to speed up the solution of certain optimization problems by leveraging training data. Learned optimization solvers have been shown to outperform classical optimization algorithms in terms of convergence speed, especially for convex problems. Many existing data-driven optimization methods are based on parameterizing the update step and learning the optimal parameters (typically scalars) from the available data. We propose a novel functional parameterization approach for learned convex optimization solvers based on the classical mirror descent (MD) algorithm. Specifically, we seek to learn the optimal Bregman distance in MD by modeling the underlying convex function using an input-convex neural network (ICNN). The parameters of the ICNN are learned by minimizing the target objective function evaluated at the MD iterate after a predetermined number of iterations. The inverse of the mirror map is modeled approximately using another neural network, as the exact inverse is intractable to compute. We derive convergence rate bounds for the proposed learned mirror descent (LMD) approach with an approximate inverse mirror map, and perform extensive numerical evaluation on various convex problems such as image inpainting, denoising, learning a two-class support vector machine (SVM) classifier, and a multi-class linear classifier on fixed features.
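
The update rule behind LMD can be made concrete. Writing psi for the ICNN potential, one MD step maps the iterate to the dual space via y_k = ∇psi(x_k), takes a gradient step y_k − t_k ∇f(x_k), and maps back with a second network that approximates the inverse mirror map (∇psi)^{-1}. Below is a minimal PyTorch-style sketch of this scheme, assuming the standard ICNN construction of Amos et al. (2017); the class names, layer sizes, activation, step size, and the inverse-consistency penalty are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ICNN(nn.Module):
    """Input-convex network (Amos et al., 2017): non-negative weights on
    the hidden path plus a convex, non-decreasing activation make the
    scalar output convex in the input. Layer sizes are illustrative."""
    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(n_layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(n_layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)
        self.act = nn.Softplus()  # convex and non-decreasing

    def clamp(self):
        # Project hidden-path weights onto the non-negative orthant after
        # each optimizer step to preserve input-convexity.
        for layer in list(self.Wz) + [self.out]:
            layer.weight.data.clamp_(min=0.0)

    def forward(self, x):
        z = self.act(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = self.act(Wx(x) + Wz(z))
        return self.out(z).squeeze(-1)

def grad_psi(psi, x):
    """Forward mirror map y = ∇psi(x) via autograd; create_graph=True lets
    the training loss backpropagate through the unrolled iterations."""
    return torch.autograd.grad(psi(x).sum(), x, create_graph=True)[0]

def lmd_step(x, f_grad, psi, inv_map, step):
    """One learned MD iteration: gradient step in the dual space,
    then the learned approximate inverse mirror map back to the primal."""
    y = grad_psi(psi, x) - step * f_grad(x)
    return inv_map(y)

def training_loss(x0, f, f_grad, psi, inv_map, step=0.1, K=10, rho=1.0):
    """Unroll K iterations and evaluate the target objective at the final
    iterate (the training criterion described in the abstract), plus an
    assumed penalty keeping inv_map close to the true inverse of ∇psi."""
    x = x0.detach().requires_grad_(True)
    x_init = x
    for _ in range(K):
        x = lmd_step(x, f_grad, psi, inv_map, step)
    consistency = ((inv_map(grad_psi(psi, x_init)) - x_init) ** 2).mean()
    return f(x).mean() + rho * consistency
```

Here inv_map can be any network from the dual variable back to the primal space, e.g. a small MLP such as nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim)). Calling psi.clamp() after each optimizer step keeps the learned potential convex, which is what sustains the Bregman distance interpretation of the learned map.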
