Paper Title
Learning to Guide Random Search
Paper Authors
Paper Abstract
We are interested in derivative-free optimization of high-dimensional functions. The sample complexity of existing methods is high and depends on problem dimensionality, unlike the dimensionality-independent rates of first-order methods. The recent success of deep learning suggests that many datasets lie on low-dimensional manifolds that can be represented by deep nonlinear models. We therefore consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold. We develop an online learning approach that learns this manifold while performing the optimization. In other words, we jointly learn the manifold and optimize the function. Our analysis suggests that the presented method significantly reduces sample complexity. We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems. Our method achieves significantly lower sample complexity than Augmented Random Search, Bayesian optimization, covariance matrix adaptation (CMA-ES), and other derivative-free optimization algorithms.
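The joint scheme the abstract describes can be made concrete. Below is a minimal sketch, not the authors' algorithm: it restricts antithetic random-search perturbations to a learned linear subspace (the paper allows deep nonlinear manifold models) and refits that subspace online from past gradient estimates. All names and parameters (guided_random_search, latent_dim, refit_every, the exploration term, and so on) are illustrative assumptions.

    import numpy as np

    def guided_random_search(f, x0, latent_dim=5, sigma=0.1, lr=0.05,
                             explore=0.05, num_iters=200, samples_per_iter=8,
                             refit_every=20):
        """Random search guided by a learned low-dimensional subspace (sketch)."""
        d = x0.size
        x = x0.copy()
        # Initial guess for the latent-to-parameter map: a random orthonormal basis.
        M = np.linalg.qr(np.random.randn(d, latent_dim))[0]
        grad_history = []
        for t in range(num_iters):
            g = np.zeros(d)
            for _ in range(samples_per_iter):
                z = np.random.randn(latent_dim)
                # Perturb mostly along the learned manifold, plus a little
                # full-space exploration so the subspace estimate can improve.
                u = M @ z + explore * np.random.randn(d)
                # Antithetic finite-difference estimate along u, using only
                # function evaluations (derivative-free).
                g += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
            g /= samples_per_iter
            x -= lr * g / (np.linalg.norm(g) + 1e-8)  # normalized step for stability
            grad_history.append(g)
            # Online manifold update: refit the basis to recent gradient estimates.
            if (t + 1) % refit_every == 0:
                G = np.stack(grad_history[-50:])
                _, _, Vt = np.linalg.svd(G, full_matrices=False)
                M = Vt[:latent_dim].T
        return x

    # Usage: a 100-dimensional quadratic whose curvature lives in 5 dimensions.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 100))
    f = lambda x: float((A @ x) @ (A @ x))
    x_opt = guided_random_search(f, rng.standard_normal(100))
    print(f(x_opt))  # far below f(x0), using only function evaluations

The sketch illustrates why sample complexity can drop: once the basis M aligns with the latent subspace, most perturbations probe only latent_dim effective directions rather than all d, while the small exploration term keeps the subspace estimate from locking in prematurely.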