Paper Title

Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization

Paper Authors

Minsu Kim, Junyoung Park, Jinkyoo Park

Paper Abstract


Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods (i.e., DRL-NCO) have shown significant merit over conventional CO solvers, as DRL-NCO can learn CO solvers with less reliance on problem-specific expert domain knowledge (heuristic methods) and supervised labeled data (supervised learning methods). This paper presents Sym-NCO, a novel regularizer-based training scheme that leverages universal symmetricities found in various CO problems and their solutions. Leveraging symmetricities such as rotational and reflectional invariance can greatly improve the generalization capability of DRL-NCO, because it allows the learned solver to exploit the symmetricities commonly shared within the same CO problem class. Our experimental results verify that Sym-NCO greatly improves the performance of DRL-NCO methods on four CO tasks, including the traveling salesman problem (TSP), the capacitated vehicle routing problem (CVRP), the prize collecting TSP (PCTSP), and the orienteering problem (OP), without utilizing problem-specific expert domain knowledge. Remarkably, Sym-NCO outperformed not only existing DRL-NCO methods but also a competitive conventional solver, iterative local search (ILS), on PCTSP at 240× faster speed. Our source code is available at https://github.com/alstn12088/Sym-NCO.
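To illustrate the rotational and reflectional invariance the abstract refers to, the sketch below (an illustration, not the paper's actual implementation; the function names `random_rotation_matrix`, `augment_instance`, and `tour_length` are ours) generates rotated/reflected views of a TSP instance's city coordinates and checks that a fixed tour has the same cost on every view, which is the property a symmetry-aware regularizer can exploit:

```python
import numpy as np

def random_rotation_matrix(rng):
    """Sample a random 2D rotation, optionally composed with a reflection."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    if rng.random() < 0.5:  # reflect across the x-axis half the time
        R = R @ np.array([[1.0, 0.0], [0.0, -1.0]])
    return R

def augment_instance(coords, num_views, seed=0):
    """Return `num_views` rotated/reflected copies of the (n, 2) coordinate array."""
    rng = np.random.default_rng(seed)
    return [coords @ random_rotation_matrix(rng).T for _ in range(num_views)]

def tour_length(coords, tour):
    """Total Euclidean length of the closed tour visiting cities in `tour` order."""
    ordered = coords[tour]
    return float(np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1).sum())

# Distances are preserved by rotation/reflection, so the tour cost is identical
# on every augmented view of the same instance.
rng = np.random.default_rng(42)
coords = rng.random((10, 2))
tour = np.arange(10)
base = tour_length(coords, tour)
for view in augment_instance(coords, num_views=4):
    assert abs(tour_length(view, tour) - base) < 1e-9
```

Because all such views share one optimal solution, a learned solver can be regularized to produce consistent solutions across them, which is the intuition behind the symmetry-leveraging scheme described above.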
