Paper Title
Per-run Algorithm Selection with Warm-starting using Trajectory-based Features
Paper Authors
Paper Abstract
Per-instance algorithm selection seeks to recommend, for a given problem instance and a given performance criterion, one or several suitable algorithms that are expected to perform well for the particular setting. The selection is classically done offline, using openly available information about the problem instance or features that are extracted from the instance during a dedicated feature extraction step. This ignores valuable information that the algorithms accumulate during the optimization process. In this work, we propose an alternative, online algorithm selection scheme which we coin per-run algorithm selection. In our approach, we start the optimization with a default algorithm, and, after a certain number of iterations, extract instance features from the observed trajectory of this initial optimizer to determine whether to switch to another optimizer. We test this approach using the CMA-ES as the default solver, and a portfolio of six different optimizers as potential algorithms to switch to. In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase. We show that our approach outperforms static per-instance algorithm selection. We also compare two different feature extraction principles, based on exploratory landscape analysis and time series analysis of the internal state variables of the CMA-ES, respectively. We show that a combination of both feature sets provides the most accurate recommendations for our test cases, taken from the BBOB function suite from the COCO platform and the YABBOB suite from the Nevergrad platform.
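The abstract describes a three-stage pipeline: run a default solver (CMA-ES) for a fixed budget, extract features from its observed trajectory, then either continue or switch to another optimizer warm-started with information from the first phase. The following is a minimal sketch of that loop, not the paper's implementation: it uses the `cma` package as the default solver and a SciPy optimizer as a stand-in portfolio member, and the feature extractor and trained selector (`extract_features`, `selector`) are hypothetical placeholders for the paper's ELA and internal-state time-series features.

```python
# Minimal per-run algorithm selection sketch (illustrative, not the paper's code).
import cma
import numpy as np
from scipy.optimize import minimize


def sphere(x):
    """Toy objective; the paper uses the BBOB and YABBOB suites."""
    return float(np.sum(np.asarray(x) ** 2))


def per_run_algorithm_selection(f, dim, switch_after=50):
    # Phase 1: run the default solver (CMA-ES) for a fixed number of iterations.
    es = cma.CMAEvolutionStrategy(dim * [1.0], 0.5, {"verbose": -9})
    trajectory = []  # (x, f(x)) samples observed along the default solver's run
    for _ in range(switch_after):
        xs = es.ask()
        fs = [f(x) for x in xs]
        es.tell(xs, fs)
        trajectory.extend(zip(xs, fs))

    # Phase 2 (hypothetical): compute trajectory-based features and ask a
    # pre-trained selector which portfolio optimizer to switch to, if any.
    # features = extract_features(trajectory, es)  # ELA + CMA-ES state features
    # choice = selector.predict(features)          # e.g. "BFGS" or "stay"
    choice = "BFGS"  # placeholder decision for illustration

    if choice == "stay":
        es.optimize(f)  # keep running the default solver to termination
        return es.result.xbest, es.result.fbest

    # Phase 3: warm-start the second optimizer with information accumulated
    # during the first phase -- here simply the best-so-far point.
    x_warm = es.result.xbest
    res = minimize(f, x_warm, method=choice)
    return res.x, res.fun


xbest, fbest = per_run_algorithm_selection(sphere, dim=5)
print(f"best value found: {fbest:.3e}")
```

In this sketch the warm start is reduced to reusing the incumbent solution; the paper's warm-starting additionally transfers information accumulated by the first optimizer, and its portfolio contains six optimizers rather than a single SciPy method.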