Paper Title
Exploiting Local Optimality in Metaheuristic Search
Paper Authors
Paper Abstract
A variety of strategies have been proposed for overcoming local optimality in metaheuristic search. This paper examines characteristics of moves that can be exploited to make good decisions about steps that lead away from a local optimum and then lead toward a new local optimum. We introduce strategies to identify and take advantage of useful features of solution history with an adaptive memory metaheuristic, to provide rules for selecting moves that offer promise for discovering improved local optima. Our approach uses a new type of adaptive memory based on a construction called exponential extrapolation. The memory operates by means of threshold inequalities that ensure selected moves will not lead to a specified number of most recently encountered local optima. Associated thresholds are embodied in choice rule strategies that further exploit the exponential extrapolation concept. Together these produce a threshold-based Alternating Ascent (AA) algorithm that opens a variety of research possibilities for exploration.
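To make the abstract's central idea concrete, the following is a minimal illustrative sketch of an exponentially weighted memory over recent local optima with a threshold-style test that rejects moves returning to any of them. The class name, the power-of-two weighting, and the membership test are assumptions for illustration only; the paper's actual exponential extrapolation construction and threshold inequalities are more elaborate.

```python
from collections import deque

class ExponentialMemory:
    """Toy sketch of exponentially weighted adaptive memory.

    Stores the last `depth` local optima (binary solution vectors). Each
    variable j accumulates weight 2**k for the k-th stored optimum in which
    x_j = 1, so the memory value encodes the variable's recent history in
    binary. A threshold-style test then ensures a candidate solution differs
    from every stored local optimum. (Illustrative only; not the paper's
    exact formulation.)
    """

    def __init__(self, depth=3):
        self.optima = deque(maxlen=depth)  # most recent optimum is last

    def record(self, x):
        """Record a newly encountered local optimum."""
        self.optima.append(tuple(x))

    def memory(self, j):
        """Binary-encoded history of variable j across stored optima."""
        return sum(x[j] << k for k, x in enumerate(self.optima))

    def avoids_recent_optima(self, x):
        """Threshold-style check: candidate must differ from all stored optima."""
        return all(tuple(x) != opt for opt in self.optima)

mem = ExponentialMemory(depth=3)
mem.record([1, 0, 1])
mem.record([1, 1, 0])
print(mem.memory(0))                         # → 3 (variable 0 was 1 in both optima)
print(mem.avoids_recent_optima([1, 1, 0]))   # → False (matches the latest optimum)
print(mem.avoids_recent_optima([0, 1, 1]))   # → True
```

Because the weights are powers of two, distinct recent histories map to distinct memory values, which is the property that makes a single inequality per variable usable as a screening rule.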