Paper Title

Towards R-learner with Continuous Treatments

Authors

Yichi Zhang, Dehan Kong, Shu Yang

Abstract

The R-learner is widely used in causal inference due to its flexibility and efficiency in estimating the conditional average treatment effect. However, extending the R-learner framework from binary to continuous treatments introduces a non-identifiability issue, as the functional zero constraint inherent to the conditional average treatment effect cannot be directly imposed in the R-loss under continuous treatments. To address this, we propose a two-step identification strategy: we first identify an intermediary function via Tikhonov regularization, and then recover the conditional average treatment effect by applying a zero-constraining operator. Building on this strategy, we develop an $\ell_2$-regularized R-learner framework to estimate the conditional average treatment effect under continuous treatments. The new framework accommodates modern, flexible machine learning algorithms for estimating both the nuisance functions and the target estimand. Theoretical properties, including error rates, asymptotic normality, and confidence intervals, are established when the target estimand is approximated by a B-spline sieve.
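The two-step strategy in the abstract can be illustrated with a deliberately simplified sketch: a ridge (Tikhonov) penalty on centered polynomial features stands in for the regularized first step that pins down one intermediary function $h$ among the observationally equivalent class, and subtracting the fitted function at the baseline treatment $a = 0$ stands in for the zero-constraining operator. The polynomial basis, oracle nuisance function, and uniform treatment distribution are illustrative assumptions only; the paper itself uses B-spline sieves and allows flexible machine-learning nuisance estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: covariate X, continuous treatment A ~ Uniform(0, 1), outcome Y.
n = 2000
X = rng.uniform(-1.0, 1.0, n)
A = rng.uniform(0.0, 1.0, n)

def tau_true(x, a):
    # True effect curve; satisfies the zero constraint tau(x, 0) = 0.
    return a * (1.0 + 0.5 * x)

Y = np.cos(X) + tau_true(X, A) + rng.normal(0.0, 0.1, n)

# Nuisance m(x) = E[Y | X], oracle here for brevity (the framework allows
# any flexible ML estimator).  Since tau is linear in a and E[A] = 1/2,
# E[Y | X] = cos(X) + tau(X, 1/2).
m = np.cos(X) + tau_true(X, 0.5)
R = Y - m  # outcome residual

# Polynomial basis phi(x, a) for the intermediary function h.
def phi(x, a):
    return np.column_stack([np.ones_like(x), x, a, a * x, a**2, a**2 * x])

# E[phi(x, A) | x] in closed form (E[A] = 1/2, E[A^2] = 1/3 under Uniform).
phi_bar = np.column_stack([np.ones(n), X, np.full(n, 0.5), 0.5 * X,
                           np.full(n, 1.0 / 3.0), X / 3.0])

# Step 1: minimize the R-loss with a Tikhonov (ridge) penalty.  Adding any
# g(x) to h leaves the centered features -- and hence the R-loss --
# unchanged, which is the non-identifiability the penalty resolves.
Phi_c = phi(X, A) - phi_bar
lam = 1e-3
beta = np.linalg.solve(Phi_c.T @ Phi_c + lam * np.eye(Phi_c.shape[1]),
                       Phi_c.T @ R)

# Step 2: zero-constraining operator, tau(x, a) = h(x, a) - h(x, 0);
# the g(x) ambiguity cancels in this difference.
def tau_hat(x, a):
    return (phi(x, a) - phi(x, np.zeros_like(x))) @ beta

xs, aa = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(0, 1, 11))
err = np.max(np.abs(tau_hat(xs.ravel(), aa.ravel())
                    - tau_true(xs.ravel(), aa.ravel())))
print(f"max |tau_hat - tau_true| on a grid: {err:.3f}")
```

Note that the centered intercept and $x$ columns of the design matrix are identically zero, so their coefficients are determined solely by the ridge term (set to zero), while the final estimate $\hat\tau(x, a) = \hat h(x, a) - \hat h(x, 0)$ is invariant to that choice by construction and satisfies $\hat\tau(x, 0) = 0$ exactly.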
