Paper Title
Multi-Scale Zero-Order Optimization of Smooth Functions in an RKHS
Paper Authors
Paper Abstract
We aim to optimize a black-box function $f:\mathcal{X} \mapsto \mathbb{R}$ under the assumption that $f$ is Hölder smooth and has bounded norm in the RKHS associated with a given kernel $K$. This problem is known to have an agnostic Gaussian Process (GP) bandit interpretation in which an appropriately constructed GP surrogate model with kernel $K$ is used to obtain an upper confidence bound (UCB) algorithm. In this paper, we propose a new algorithm (\texttt{LP-GP-UCB}) in which the usual GP surrogate model is augmented with Local Polynomial (LP) estimators of the Hölder smooth function $f$ to construct a multi-scale UCB that guides the search for the optimizer. We analyze this algorithm and derive high-probability bounds on its simple and cumulative regret. We then prove that the elements of many common RKHSs are Hölder smooth, obtain the corresponding Hölder smoothness parameters, and hence specialize our regret bounds to several commonly used kernels. When specialized to the Squared Exponential (SE) kernel, \texttt{LP-GP-UCB} matches the optimal performance, while for Matérn kernels $(K_\nu)_{\nu>0}$ it yields uniformly tighter regret bounds for all values of the smoothness parameter $\nu>0$. Most notably, for certain ranges of $\nu$, the algorithm achieves near-optimal bounds on simple and cumulative regret, matching the algorithm-independent lower bounds up to polylog factors and thus closing the large gap between the existing upper and lower bounds for these values of $\nu$. Additionally, our analysis provides the first explicit regret bounds, in terms of the budget $n$, for the Rational-Quadratic (RQ) and Gamma-Exponential (GE) kernels. Finally, experiments with synthetic functions as well as a CNN hyperparameter tuning task numerically demonstrate the practical benefits of our multi-scale partitioning approach over some existing algorithms.
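For intuition, the standard GP-UCB acquisition rule that the abstract refers to selects, at round $t$, the point
\[
x_t \;=\; \arg\max_{x \in \mathcal{X}} \; \mu_{t-1}(x) \;+\; \beta_t^{1/2}\, \sigma_{t-1}(x),
\]
where $\mu_{t-1}$ and $\sigma_{t-1}$ denote the posterior mean and standard deviation of the GP surrogate with kernel $K$ after $t-1$ evaluations, and $\beta_t$ is a confidence-width parameter. This is only the classical GP-UCB index given here as background; the defining ingredients of \texttt{LP-GP-UCB}, namely the local polynomial estimators and the multi-scale partitioning used to tighten this index, are the paper's contribution and are not reproduced by this sketch.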