Paper Title
Adaptive Local Bayesian Optimization Over Multiple Discrete Variables
Paper Authors
Paper Abstract
In machine learning algorithms, the choice of hyperparameters is often more an art than a science, requiring labor-intensive search guided by expert experience. Automating hyperparameter optimization to exclude human intervention therefore holds great appeal, especially for black-box functions. Recently, there has been increasing demand for solving such concealed tasks with better generalization, though the task-dependence issue is not easy to resolve. The Black-Box Optimization Challenge (NeurIPS 2020) required competitors to build a robust black-box optimizer across different domains of standard machine learning problems. This paper describes the approach of team KAIST OSI in a step-wise manner, which outperforms the baseline algorithms by up to +20.39%. We first strengthen the local Bayesian search under the concept of region reliability. Then, we design a combinatorial kernel for the Gaussian process. In a similar vein, we combine Bayesian and multi-armed bandit (MAB) approaches to select values according to variable type: real and integer variables are handled by Bayesian optimization, while boolean and categorical variables are handled by MAB. Empirical evaluations demonstrate that our method outperforms existing methods across different tasks.
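The variable-type split described in the abstract can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the names `UCB1` and `suggest` are hypothetical, and a simple local Gaussian perturbation stands in for the paper's Bayesian surrogate over the real and integer variables, while a standard UCB1 bandit handles a categorical variable.

```python
import math
import random

class UCB1:
    """UCB1 bandit: one instance per boolean/categorical variable."""
    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}   # pulls per arm
        self.sums = {a: 0.0 for a in self.arms}   # cumulative reward per arm
        self.total = 0                            # total pulls

    def select(self):
        # Play every arm once first, then maximize the UCB1 score:
        # empirical mean + exploration bonus.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        return max(self.arms,
                   key=lambda a: self.sums[a] / self.counts[a]
                   + math.sqrt(2.0 * math.log(self.total) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward
        self.total += 1

def suggest(best_real, bandit, step=0.1, rng=random):
    """Propose the next candidate point: perturb the real-valued part
    locally (a crude stand-in for a Bayesian surrogate's suggestion),
    and pick the categorical part with the bandit."""
    real_part = [x + rng.gauss(0.0, step) for x in best_real]
    cat_part = bandit.select()
    return real_part, cat_part
```

For instance, on a toy objective where the categorical choice "b" yields a higher reward than "a", repeated `suggest`/`update` cycles concentrate the bandit's pulls on "b" while the real-valued part is explored locally around the incumbent.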