Paper Title

Exploiting Instance and Variable Similarity to Improve Learning-Enhanced Branching

Paper Authors

Gu, Xiaoyi; Dey, Santanu S.; Xavier, Álinson S.; Qiu, Feng

Paper Abstract

In many operational applications, it is necessary to routinely find, within a very limited time window, provably good solutions to challenging mixed-integer linear programming (MILP) problems. An example is the Security-Constrained Unit Commitment (SCUC) problem, solved daily to clear the day-ahead electricity markets. Previous research demonstrated that machine learning (ML) methods can produce high-quality heuristic solutions to combinatorial problems, but proving the optimality of these solutions, even with recently proposed learning-enhanced branching methods, can still be time-consuming. In this paper, we propose a simple modification to improve the performance of learning-enhanced branching methods, based on the key observation that, in such operational applications, instances are significantly similar to each other. Specifically, instances typically share the same size and problem structure, with slight differences only in the matrix coefficients, right-hand sides, and objective function. In addition, certain groups of variables within a given instance are also typically similar to each other. Therefore, unlike previous works in the literature, which predicted all branching scores with a single ML model, we propose training separate ML models per variable or per group of variables, based on their similarity. We evaluate this enhancement on realistic large-scale SCUC instances and obtain significantly better gap closures than previous works with the same amount of training data.
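To make the per-group idea concrete, here is a minimal, self-contained sketch, not the authors' implementation: it assumes each variable in every instance comes with a fixed-length feature vector and a branching-score label, that similar variables share a group label (e.g., commitment variables of the same generator across time periods), and it fits one regressor per group instead of a single shared model. All names, the choice of regressor, and the synthetic data are illustrative assumptions.

```python
# Hypothetical sketch of per-group branching-score models, as contrasted in the
# abstract with prior work's single shared model. Data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in data: n_instances similar MILP instances, each with the same n_vars
# binary variables. features[i, v] describes variable v in instance i;
# scores[i, v] is the branching score we want the model to predict.
n_instances, n_vars, n_feats = 50, 12, 5
features = rng.normal(size=(n_instances, n_vars, n_feats))
scores = rng.normal(size=(n_instances, n_vars))

# Assumed similarity grouping: variables with the same label are trained together.
groups = np.array([v % 3 for v in range(n_vars)])

# Train one model per group, using only that group's variables across all
# training instances -- instead of pooling every variable into a single model.
models = {}
for g in np.unique(groups):
    mask = groups == g
    X = features[:, mask, :].reshape(-1, n_feats)
    y = scores[:, mask].reshape(-1)
    models[g] = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Predict branching scores for a new, structurally identical instance by
# routing each variable to its group's model.
new_instance = rng.normal(size=(n_vars, n_feats))
predicted = np.array(
    [models[groups[v]].predict(new_instance[v : v + 1])[0] for v in range(n_vars)]
)
print(predicted.round(3))
```

Because the instances share size and structure, variable v in one instance corresponds to variable v in every other, which is what makes per-variable or per-group models trainable at all; the extreme case of one model per variable is the same loop with `groups = np.arange(n_vars)`.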
