Paper Title
Practical calibration of the temperature parameter in Gibbs posteriors

Paper Authors

Perrotta, Lucie

Paper Abstract
PAC-Bayesian algorithms and Gibbs posteriors are gaining popularity due to their robustness against model misspecification, even when Bayesian inference is inconsistent. The PAC-Bayesian alpha-posterior is a generalization of the standard Bayes posterior which can be tempered with a parameter alpha to handle inconsistency. Data-driven methods for tuning alpha have been proposed, but they are still few and often computationally heavy. Additionally, the adequacy of these methods in cases where we use variational approximations instead of exact alpha-posteriors is unclear. This narrows their usage to simple models and prevents their application to large-scale problems. We hence need fast methods for tuning alpha that work with both exact and variational alpha-posteriors. First, we propose two data-driven methods for tuning alpha, based on sample-splitting and bootstrapping respectively. Second, we formulate the (exact or variational) posteriors of three popular statistical models and modify them into alpha-posteriors. For each model, we test our strategies and compare them with standard Bayes and Grunwald's SafeBayes. While bootstrapping achieves mixed results, sample-splitting and SafeBayes perform well on the exact and variational alpha-posteriors we describe, and achieve better results than standard Bayes in misspecified or complex models. Additionally, sample-splitting outperforms SafeBayes in terms of speed. Sample-splitting offers a fast and easy solution to inconsistency and typically performs similarly to or better than standard Bayesian inference. Our results provide hints on the calibration of alpha in PAC-Bayesian and Gibbs posteriors, and may facilitate the use of these methods in large and complex models.
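To make the two key ideas concrete, the tempered (alpha-) posterior raises the likelihood to the power alpha, i.e. pi_alpha(theta | x_1..n) is proportional to pi(theta) * prod_i p(x_i | theta)^alpha, and the sample-splitting strategy picks the alpha whose alpha-posterior best predicts a held-out split. Below is a minimal sketch of that strategy for a conjugate Gaussian mean model with known variance; the model, prior hyperparameters, split fraction, and alpha grid are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def alpha_posterior(x, alpha, mu0=0.0, tau0=1.0, sigma=1.0):
    """Tempered conjugate Gaussian update: likelihood raised to power alpha.

    Prior N(mu0, tau0^2), data N(theta, sigma^2) with sigma known.
    Returns the alpha-posterior mean and standard deviation of theta.
    """
    n = len(x)
    prec = 1.0 / tau0**2 + alpha * n / sigma**2
    mu = (mu0 / tau0**2 + alpha * x.sum() / sigma**2) / prec
    return mu, 1.0 / np.sqrt(prec)

def gauss_logpdf(x, mu, sd):
    # Log density of N(mu, sd^2), evaluated elementwise.
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

def tune_alpha_by_splitting(x, alphas, sigma=1.0, frac=0.5, seed=0):
    """Fit each alpha-posterior on one split, score its posterior
    predictive log density on the other, and return the best alpha."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    cut = int(frac * len(x))
    train, test = x[idx[:cut]], x[idx[cut:]]
    scores = []
    for a in alphas:
        mu, tau = alpha_posterior(train, a, sigma=sigma)
        # Posterior predictive under this model is N(mu, sigma^2 + tau^2).
        scores.append(gauss_logpdf(test, mu, np.sqrt(sigma**2 + tau**2)).sum())
    return alphas[int(np.argmax(scores))]

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=200)      # synthetic, well-specified data
alphas = np.array([0.1, 0.25, 0.5, 1.0, 2.0])
best = tune_alpha_by_splitting(x, alphas)
```

Since the posterior stays conjugate for every alpha, the whole grid search costs only a handful of closed-form updates; with a variational alpha-posterior, the same held-out scoring applies with the approximate predictive in place of the exact one.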
