Paper Title

Distributed Evolution Strategies for Black-box Stochastic Optimization

Authors

Xiaoyu He, Zibin Zheng, Chuan Chen, Yuren Zhou, Chuan Luo, Qingwei Lin

Abstract

This work concerns evolutionary approaches to distributed stochastic black-box optimization, in which each worker can individually solve an approximation of the problem with nature-inspired algorithms. We propose a distributed evolution strategy (DES) algorithm grounded on a proper modification to evolution strategies, a family of classic evolutionary algorithms, as well as a careful combination with existing distributed frameworks. On smooth and nonconvex landscapes, DES has a convergence rate competitive with existing zeroth-order methods, and can exploit sparsity, where applicable, to match the rate of first-order methods. The DES method uses a Gaussian probability model to guide the search and avoids the numerical issues resulting from the finite-difference techniques used in existing zeroth-order methods. The DES method is also fully adaptive to the problem landscape, as its convergence is guaranteed under any parameter setting. We further propose two alternative sampling schemes which significantly improve the sampling efficiency while leading to similar performance. Simulation studies on several machine learning problems suggest that the proposed methods show much promise in reducing the convergence time and improving the robustness to parameter settings.
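To illustrate the core idea behind evolution strategies referenced in the abstract — guiding the search with a Gaussian probability model rather than finite-difference gradient estimates — here is a minimal, self-contained sketch of a plain rank-weighted evolution strategy. This is an illustrative toy, not the authors' DES algorithm: the step size `lr`, the population size `pop`, and the `sphere` objective are all assumptions chosen for the example.

```python
import numpy as np

def simple_es(f, x0, sigma=0.1, pop=20, iters=200, lr=0.05, seed=0):
    """Toy evolution strategy: sample a Gaussian population around the
    current point and recombine toward better samples, using only
    function evaluations (no finite-difference gradients)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        eps = rng.standard_normal((pop, x.size))        # Gaussian perturbations
        fitness = np.array([f(x + sigma * e) for e in eps])
        # Rank-based weights: the best (lowest) samples get the largest weight
        ranks = fitness.argsort().argsort()
        w = (pop - ranks).astype(float)
        w = w / w.sum() - 1.0 / pop                     # center weights to zero mean
        x = x + (lr / sigma) * (w @ eps)                # move toward better samples
    return x

# Usage: minimize a simple quadratic (sphere) function
sphere = lambda x: float(np.sum(x ** 2))
x_star = simple_es(sphere, np.ones(5))
```

Because selection is rank-based, the update is invariant to monotone transformations of the objective, which is one reason evolution strategies tolerate the noisy, non-smooth landscapes the abstract targets.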
