Paper Title

Estimation and Inference by Stochastic Optimization

Author

Forneron, Jean-Jacques

Abstract

In non-linear estimations, it is common to assess sampling uncertainty by bootstrap inference. For complex models, this can be computationally intensive. This paper combines optimization with resampling: turning stochastic optimization into a fast resampling device. Two methods are introduced: a resampled Newton-Raphson (rNR) and a resampled quasi-Newton (rqN) algorithm. Both produce draws that can be used to compute consistent estimates, confidence intervals, and standard errors in a single run. The draws are generated by a gradient and Hessian (or an approximation) computed from batches of data that are resampled at each iteration. The proposed methods transition quickly from optimization to resampling when the objective is smooth and strictly convex. Simulated and empirical applications illustrate the properties of the methods on large scale and computationally intensive problems. Comparisons with frequentist and Bayesian methods highlight the features of the algorithms.
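The abstract describes draws generated by taking Newton steps using a gradient and Hessian computed on data resampled at each iteration. The following is a minimal illustrative sketch of that idea (not the paper's actual implementation): the function name, arguments, burn-in length, and the toy mean-estimation example are all assumptions made here for illustration.

```python
import numpy as np

def resampled_newton_raphson(grad, hess, data, theta0, iters=2000, burn=500, seed=0):
    """Sketch of a resampled Newton-Raphson (rNR) loop: each iteration
    resamples the data with replacement and takes a full Newton step
    using the gradient and Hessian computed on that resample. After a
    burn-in, the iterates are treated as draws for inference (the paper
    shows this works when the objective is smooth and strictly convex)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    theta = np.asarray(theta0, dtype=float)
    draws = []
    for t in range(iters):
        idx = rng.integers(0, n, size=n)       # nonparametric bootstrap resample
        batch = data[idx]
        g = grad(theta, batch)                 # gradient on the resampled batch
        H = hess(theta, batch)                 # Hessian on the resampled batch
        theta = theta - np.linalg.solve(H, g)  # Newton-Raphson step
        if t >= burn:
            draws.append(theta.copy())
    return np.array(draws)

# Toy example (hypothetical): estimate a Gaussian mean by minimizing
# the smooth, strictly convex objective 0.5 * mean((x - theta)^2).
rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=500).reshape(-1, 1)
grad = lambda th, b: np.array([-(b - th).mean()])
hess = lambda th, b: np.array([[1.0]])

draws = resampled_newton_raphson(grad, hess, x, theta0=[0.0])
est = draws.mean(axis=0)        # consistent point estimate from the draws
se = draws.std(axis=0, ddof=1)  # bootstrap-style standard error
```

In this quadratic toy problem each Newton step lands exactly on the resampled batch mean, so the post-burn-in draws coincide with nonparametric bootstrap draws of the sample mean; the general algorithms in the paper (rNR and rqN) handle non-linear objectives where this correspondence is only approximate.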
