Paper Title
Learning to Solve Optimization Problems with Hard Linear Constraints
Paper Authors
Paper Abstract
Constrained optimization problems appear in a wide variety of challenging real-world problems, where the constraints often capture the physics of the underlying system. Classic methods for solving these problems rely on iterative algorithms that explore the feasible domain in search of the best solution. These iterative methods are often the computational bottleneck in decision-making and adversely impact time-sensitive applications. Recently, neural approximators have shown promise as a replacement for iterative solvers: they can output the optimal solution in a single feed-forward pass, providing rapid solutions to optimization problems. However, enforcing constraints through neural networks remains an open challenge. This paper develops a neural approximator that maps the inputs of an optimization problem with hard linear constraints to a feasible solution that is nearly optimal. Our proposed approach consists of four main steps: 1) reducing the original problem to an optimization over a set of independent variables, 2) finding a gauge function that maps the ∞-norm unit ball to the feasible set of the reduced problem, 3) learning a neural approximator that maps the optimization's inputs to an optimal point in the ∞-norm unit ball, and 4) finding the values of the dependent variables from the independent variables and recovering a solution to the original problem. This sequence of steps guarantees hard feasibility. Unlike current learning-assisted solutions, our method is free of parameter tuning and removes iterations altogether. We demonstrate the performance of the proposed method on quadratic programming in the context of optimal power dispatch (critical to the resiliency of the electric grid) and on a constrained non-convex optimization in the context of image registration problems.
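To illustrate step 2 of the abstract, the sketch below shows one standard way to build a gauge-based map from the ∞-norm unit ball onto a bounded polytope {x : Ax ≤ b}, using the Minkowski gauge of the polytope shifted by a strictly interior point. The function name `gauge_map` and the interior-point argument `x0` are illustrative assumptions, not notation from the paper; the paper's exact construction may differ in details.

```python
import numpy as np

def gauge_map(z, A, b, x0):
    """Map a point z from the infinity-norm unit ball into the bounded
    polytope {x : A x <= b}, assumed to contain x0 strictly in its interior.

    The map rescales z by the ratio of the two Minkowski gauges: the gauge
    of the unit ball (which is just ||z||_inf) and the gauge of the shifted
    polytope C - x0. Boundary points of the ball land on the boundary of
    the polytope, so feasibility holds by construction.
    """
    z = np.asarray(z, dtype=float)
    slack = b - A @ x0  # strictly positive when x0 is interior
    if not np.all(slack > 0):
        raise ValueError("x0 must lie strictly inside the polytope")
    if np.allclose(z, 0.0):
        return np.array(x0, dtype=float).copy()
    # Gauge of C - x0 at z: smallest t > 0 with z / t inside C - x0.
    # For a bounded polytope some row of A @ z is positive, so this is > 0.
    gauge_C = np.max((A @ z) / slack)
    gauge_B = np.max(np.abs(z))  # gauge of the infinity-norm unit ball
    return x0 + (gauge_B / gauge_C) * z
```

For example, on the box [-2, 2]^2 centered at the origin, the boundary point z = (1, 0.5) maps to (2, 1) on the boundary of the box, and interior points map to interior points, so any network output constrained to the unit ball yields a feasible point.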