Paper Title

Black-box Coreset Variational Inference

Authors

Dionysis Manousakas, Hippolyt Ritter, Theofanis Karaletsos

Abstract

Recent advances in coreset methods have shown that a selection of representative datapoints can replace massive volumes of data for Bayesian inference, preserving the relevant statistical information and significantly accelerating subsequent downstream tasks. Existing variational coreset constructions rely on either selecting subsets of the observed datapoints, or jointly performing approximate inference and optimizing pseudodata in the observed space akin to inducing points methods in Gaussian Processes. So far, both approaches are limited by complexities in evaluating their objectives for general purpose models, and require generating samples from a typically intractable posterior over the coreset throughout inference and testing. In this work, we present a black-box variational inference framework for coresets that overcomes these constraints and enables principled application of variational coresets to intractable models, such as Bayesian neural networks. We apply our techniques to supervised learning problems, and compare them with existing approaches in the literature for data summarization and inference.
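To make the abstract's idea concrete, here is a minimal toy sketch of black-box variational inference over a weighted coreset. Everything in it is an illustrative assumption, not the paper's algorithm: the model is a one-dimensional Gaussian-mean problem, the coreset is a uniformly upweighted subset of the data, and the variational posterior is fit with reparameterized Monte Carlo gradients of the ELBO, so only gradients of the log-joint are needed (the "black-box" ingredient).

```python
import math
import random

random.seed(0)

# Toy model (assumed for illustration): theta ~ N(0, tau0^2), x_i ~ N(theta, 1).
# A coreset keeps m weighted points whose weighted likelihood stands in
# for the full dataset of N points.
tau0 = 10.0
data = [random.gauss(2.0, 1.0) for _ in range(1000)]
coreset = data[:20]                                   # pseudodata: here, a plain subset
weights = [len(data) / len(coreset)] * len(coreset)   # uniform upweighting to N

def grad_log_joint(theta):
    """d/dtheta of log p(theta) + sum_j w_j * log N(x_j | theta, 1)."""
    g = -theta / tau0 ** 2
    for x, w in zip(coreset, weights):
        g += w * (x - theta)
    return g

# Black-box VI: fit q(theta) = N(mu, exp(2*rho)) by stochastic gradient
# ascent on the ELBO, using the reparameterization theta = mu + exp(rho)*eps.
mu, rho = 0.0, 0.0
lr, n_mc = 1e-4, 8
for step in range(2000):
    g_mu = g_rho = 0.0
    for _ in range(n_mc):                 # Monte Carlo gradient estimate
        eps = random.gauss(0.0, 1.0)
        theta = mu + math.exp(rho) * eps
        g = grad_log_joint(theta)
        g_mu += g
        g_rho += g * math.exp(rho) * eps
    g_mu /= n_mc
    g_rho = g_rho / n_mc + 1.0            # +1 is the entropy term's gradient
    mu += lr * g_mu
    rho += lr * g_rho

# Closed-form weighted ("coreset") posterior mean, for comparison only;
# the black-box estimator above never used this conjugate structure.
prec = 1.0 / tau0 ** 2 + sum(weights)
post_mean = sum(w * x for x, w in zip(coreset, weights)) / prec
print(mu, post_mean)
```

In this conjugate toy case the weighted posterior is available in closed form, which lets us check that the black-box estimate of `mu` lands near it; in the intractable models the abstract targets (e.g. Bayesian neural networks), only the Monte Carlo path exists, and the coreset weights and pseudodata would themselves be optimized rather than fixed as here.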
