Paper Title
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning
Paper Authors
Paper Abstract
Off-policy evaluation and learning (OPE/L) use offline observational data to make better decisions, which is crucial in applications where online experimentation is limited. However, depending entirely on logged data, OPE/L is sensitive to environment distribution shifts -- discrepancies between the data-generating environment and the one where policies are deployed. \citet{si2020distributional} proposed distributionally robust OPE/L (DROPE/L) to address this, but the proposal relies on inverse-propensity weighting, whose estimation error and regret deteriorate if propensities are nonparametrically estimated and whose variance is suboptimal even if they are not. For standard, non-robust OPE/L, this is solved by doubly robust (DR) methods, but they do not naturally extend to the more complex DROPE/L, which involves a worst-case expectation. In this paper, we propose the first DR algorithms for DROPE/L with KL-divergence uncertainty sets. For evaluation, we propose Localized Doubly Robust DROPE (LDR$^2$OPE) and show that it achieves semiparametric efficiency under weak product rate conditions. Thanks to a localization technique, LDR$^2$OPE only requires fitting a small number of regressions, just like DR methods for standard OPE. For learning, we propose Continuum Doubly Robust DROPL (CDR$^2$OPL) and show that, under a product rate condition involving a continuum of regressions, it enjoys a fast regret rate of $\mathcal{O}\left(N^{-1/2}\right)$ even when unknown propensities are nonparametrically estimated. We empirically validate our algorithms in simulations and further extend our results to general $f$-divergence uncertainty sets.
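Two illustrative sketches may help orient readers; both use notation adapted for exposition rather than taken from the paper itself. First, the "worst-case expectation" over a KL-divergence uncertainty set admits the well-known convex dual from the DRO literature (the paper's exact statement may differ): for uncertainty radius $\delta$ and data-generating distribution $P$,

$$
W(\pi, \delta) \;=\; \inf_{Q:\, \mathrm{KL}(Q \Vert P) \le \delta} \mathbb{E}_{Q}\big[Y(\pi)\big] \;=\; \sup_{\alpha > 0} \Big\{ -\alpha \log \mathbb{E}_{P}\big[e^{-Y(\pi)/\alpha}\big] - \alpha \delta \Big\},
$$

which reduces the worst-case value to a one-dimensional optimization over the dual variable $\alpha$.

Second, a minimal sketch of the textbook doubly robust estimator for standard (non-robust) OPE, the building block the abstract contrasts with inverse-propensity weighting; LDR$^2$OPE and CDR$^2$OPL extend this idea to the robust setting. All names here (pi_target, e_hat, q_hat) are hypothetical, not the paper's API:

import numpy as np

def dr_ope_value(x, a, y, pi_target, e_hat, q_hat):
    """Doubly robust estimate of a target policy's value from logged bandit data.

    x: contexts, shape (n, d); a: logged actions, shape (n,); y: rewards, shape (n,)
    pi_target(x): target-policy action probabilities, shape (n, K)
    e_hat(x): estimated logging propensities, shape (n, K)
    q_hat(x): estimated outcome regression E[Y | X, A], shape (n, K)
    """
    n = len(y)
    probs = pi_target(x)   # (n, K) target-policy action probabilities
    q = q_hat(x)           # (n, K) outcome-regression predictions
    e = e_hat(x)           # (n, K) propensity estimates
    idx = np.arange(n)
    # Direct-method term: plug-in value of the outcome regression under pi.
    dm = (probs * q).sum(axis=1)
    # IPW correction: reweighted regression residual at the logged action.
    w = probs[idx, a] / e[idx, a]
    correction = w * (y - q[idx, a])
    return (dm + correction).mean()

The estimator is consistent if either e_hat or q_hat is consistent, hence "doubly robust"; the abstract's point is that replicating this property under a worst-case expectation is nontrivial.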