Paper Title
Infinite-dimensional optimization and Bayesian nonparametric learning of stochastic differential equations
Paper Author
Paper Abstract
The paper has two major themes. The first part of the paper establishes certain general results for infinite-dimensional optimization problems on Hilbert spaces. These results cover the classical representer theorem and many of its variants as special cases and offer a wider scope of applications. The second part of the paper then develops a systematic approach to learning the drift function of a stochastic differential equation by integrating the results of the first part with a Bayesian hierarchical framework. Importantly, our Bayesian approach incorporates low-cost sparse learning through proper use of shrinkage priors while allowing proper quantification of uncertainty through posterior distributions. Several examples at the end illustrate the accuracy of our learning scheme.
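
For background, here is the textbook statement of the classical representer theorem that the first part generalizes (standard material, not the paper's extended result). Let $\mathcal{H}$ be a reproducing kernel Hilbert space on a set $\mathcal{X}$ with kernel $k$, let $x_1, \ldots, x_n \in \mathcal{X}$ be observed points, let $L$ be an arbitrary loss, and let $\Omega$ be a strictly increasing regularizer. Then any minimizer of

    \min_{f \in \mathcal{H}} \; L\big(f(x_1), \ldots, f(x_n)\big) + \Omega\big(\lVert f \rVert_{\mathcal{H}}\big)

admits the finite representation

    f^\star(\cdot) = \sum_{i=1}^{n} \alpha_i \, k(\cdot, x_i), \qquad \alpha_1, \ldots, \alpha_n \in \mathbb{R},

so the infinite-dimensional optimization reduces to a search over $n$ real coefficients.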
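
To make the second theme concrete, the following is a minimal Python sketch of the discretize-and-regress idea behind drift learning. It is not the paper's algorithm: the paper places shrinkage priors inside a Bayesian hierarchical model, whereas this sketch substitutes a Lasso penalty (a frequentist stand-in for a shrinkage prior); the simulated process, polynomial dictionary, and tuning values are all illustrative.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck path with true drift b(x) = -2x:
#   dX_t = -2 X_t dt + sigma dW_t   (Euler-Maruyama discretization)
dt, n, sigma = 0.01, 5000, 0.5
x = np.empty(n)
x[0] = 1.0
for t in range(n - 1):
    x[t + 1] = x[t] - 2.0 * x[t] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Regression targets: (X_{t+1} - X_t) / dt is approximately b(X_t) plus noise.
y = np.diff(x) / dt
# Polynomial dictionary phi_j(x) = x^j, j = 1..8; sparsity should keep x^1 only.
Phi = np.column_stack([x[:-1] ** j for j in range(1, 9)])

# Lasso stands in for a sparsity-inducing shrinkage prior (illustrative alpha).
model = Lasso(alpha=0.05, max_iter=10000).fit(Phi, y)
print("estimated coefficients:", np.round(model.coef_, 3))
# Expected: a coefficient near -2 on x^1, the remaining terms shrunk to zero.

A fully Bayesian version would replace this point estimate with a posterior over the coefficients, from which uncertainty bands for the drift follow directly.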