Paper Title

Improving Accuracy of Interpretability Measures in Hyperparameter Optimization via Bayesian Algorithm Execution

Paper Authors

Julia Moosbauer, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl

Paper Abstract

Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are themselves black boxes. This makes it difficult to understand the decision process that leads to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. These techniques are increasingly used to explain the marginal effect of hyperparameters on the black-box cost function or to quantify the importance of hyperparameters. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc manner, the underlying sampling bias of the optimizer can distort interpretations. We propose a modified HPO method that efficiently balances the search for the global optimum w.r.t. predictive performance *and* the reliable estimation of IML explanations of an underlying black-box function, by coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark cases of both synthetic objectives and HPO of a neural network, we demonstrate that our method returns more reliable explanations of the underlying black-box without a loss of optimization performance.
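The sampling-bias problem the abstract describes can be illustrated with a minimal sketch (this is not the authors' implementation; the synthetic cost function, the bias model, and all names here are illustrative assumptions). A Monte-Carlo partial dependence estimate for one hyperparameter averages the cost over the observed values of the other hyperparameters; if those observations come from an optimizer that concentrates near the incumbent rather than from a uniform design, the estimated marginal effect shifts:

```python
import numpy as np

# Synthetic black-box "cost" over two hyperparameters x1, x2 in [0, 1].
def cost(x1, x2):
    return (x1 - 0.3) ** 2 + 0.5 * (x2 - 0.7) ** 2 + 0.2 * x1 * x2

def partial_dependence(samples, grid, f):
    """Partial dependence of x1: for each grid value g, average f(g, x2)
    over the sampled x2 values (Monte-Carlo estimate of the marginal effect)."""
    x2 = samples[:, 1]
    return np.array([f(g, x2).mean() for g in grid])

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 11)

# Unbiased design: x2 sampled uniformly over its range.
uniform = rng.uniform(0.0, 1.0, size=(200, 2))
pd_uniform = partial_dependence(uniform, grid, cost)

# Optimizer-biased design: x2 concentrated near its optimum (0.7),
# mimicking how an optimizer over-samples promising regions.
biased = np.column_stack([
    rng.uniform(0.0, 1.0, 200),
    np.clip(rng.normal(0.7, 0.05, 200), 0.0, 1.0),
])
pd_biased = partial_dependence(biased, grid, cost)

# The two estimates of the same marginal effect of x1 disagree.
print(np.abs(pd_uniform - pd_biased).max())
```

The gap between the two curves is exactly the kind of post-hoc distortion the paper targets: the proposed method instead steers sampling so that the explanation remains a reliable estimate of the true marginal effect while still optimizing performance.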
