Paper Title

Adversarial Meta-Learning of Gamma-Minimax Estimators That Leverage Prior Knowledge

Paper Authors

Qiu, Hongxiang, Luedtke, Alex

Paper Abstract

Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set $\Gamma$ of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity is defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments. Accompanying convergence guarantees are also provided. We also introduce a neural network class that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.
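The abstract describes an adversarial game: an estimator player minimizing Bayes risk against an adversary choosing the worst-case prior from $\Gamma$. The following is a minimal toy sketch of that minimax dynamic, not the paper's algorithm: it assumes a Gaussian mean model with known variance, a linear estimator class $\delta(\bar X) = a\bar X + b$, and $\Gamma$ equal to all priors on a finite grid of parameter values (the paper's generalized-moment constraints and neural-network estimator class are omitted).

```python
import numpy as np

# Toy sketch of the adversarial minimax game behind Gamma-minimax
# estimation, under simplifying assumptions NOT taken from the paper:
# - model: the sample mean is N(theta, s2) with s2 known,
# - estimator class: linear rules delta(xbar) = a*xbar + b,
# - Gamma: all priors supported on a finite grid of theta values.
# The Bayes risk of (a, b) under prior weights pi is then closed-form,
#   r(a, b, pi) = a^2 * s2 + sum_j pi_j * ((a - 1)*theta_j + b)^2,
# so both players can take exact gradient steps.

s2 = 1.0                                      # variance of the sample mean
thetas = np.linspace(-1.0, 1.0, 21)           # grid supporting the priors
pi = np.full(thetas.size, 1.0 / thetas.size)  # adversary: prior weights
a, b = 1.0, 0.0                               # estimator: start at the MLE

lr_est, lr_adv = 0.05, 0.5
for _ in range(2000):
    resid = (a - 1.0) * thetas + b            # estimator's bias at each theta
    # Estimator player: gradient DESCENT on the Bayes risk.
    grad_a = 2.0 * a * s2 + 2.0 * np.dot(pi, resid * thetas)
    grad_b = 2.0 * np.dot(pi, resid)
    a -= lr_est * grad_a
    b -= lr_est * grad_b
    # Adversary: exponentiated-gradient ASCENT on the prior weights,
    # which keeps pi on the probability simplex.
    pi = pi * np.exp(lr_adv * resid ** 2)
    pi /= pi.sum()

risk_per_theta = a ** 2 * s2 + ((a - 1.0) * thetas + b) ** 2
worst_risk = risk_per_theta.max()
print(f"a={a:.3f}, b={b:.3f}, worst-case risk={worst_risk:.3f}")
```

In this toy setting the adversary pushes mass toward the extremes of the grid, and the estimator responds by shrinking toward zero (a ≈ 0.5 here), beating the worst-case risk of the unshrunk sample mean (which is s2 = 1.0). The paper's method replaces the linear rule with a neural network class and the unconstrained simplex with generalized-moment constraints on $\Gamma$.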
