Paper Title

Learning to Optimize with Stochastic Dominance Constraints

Authors

Hanjun Dai, Yuan Xue, Niao He, Bethany Wang, Na Li, Dale Schuurmans, Bo Dai

Abstract

In real-world decision-making, uncertainty is important yet difficult to handle. Stochastic dominance provides a theoretically sound approach for comparing uncertain quantities, but optimization with stochastic dominance constraints is often computationally expensive, which limits practical applicability. In this paper, we develop a simple yet efficient approach for the problem, the Light Stochastic Dominance Solver (light-SD), that leverages useful properties of the Lagrangian. We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability and leads to tractable updates or even closed-form solutions for gradient calculations. We prove convergence of the algorithm and test it empirically. The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
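To make the constraint concrete: under second-order stochastic dominance (SSD), a random return X dominates a benchmark Y when E[(η − X)₊] ≤ E[(η − Y)₊] for every threshold η. The sketch below (not the paper's light-SD method, just a minimal illustration of the constraint it handles) evaluates the empirical SSD violation between two sample sets; for empirical distributions the gap is piecewise linear in η, so checking η at the pooled sample points suffices.

```python
import numpy as np

def expected_shortfall(samples, eta):
    """Lower partial moment E[(eta - X)_+] estimated from samples."""
    return np.mean(np.maximum(eta - samples, 0.0))

def ssd_violation(x_samples, benchmark_samples):
    """Largest gap E[(eta - X)_+] - E[(eta - Y)_+] over thresholds eta.

    X second-order stochastically dominates the benchmark Y iff the
    returned value is <= 0.  The gap is piecewise linear in eta with
    breakpoints at the sample points, so scanning the pooled samples
    is enough to find its maximum.
    """
    etas = np.union1d(x_samples, benchmark_samples)
    gaps = [expected_shortfall(x_samples, e)
            - expected_shortfall(benchmark_samples, e) for e in etas]
    return max(gaps)
```

An optimizer with an SSD constraint would penalize a positive `ssd_violation` via a Lagrange multiplier; the inner maximization over η is what the paper replaces with a learned surrogate to keep updates tractable.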
