Paper Title

Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning

Authors

Jibang Wu, Zixuan Zhang, Zhe Feng, Zhaoran Wang, Zhuoran Yang, Michael I. Jordan, Haifeng Xu

Abstract

In today's economy, it has become important for Internet platforms to consider the sequential information design problem in order to align their long-term interests with the incentives of gig service providers. This paper proposes a novel model of sequential information design, namely the Markov persuasion process (MPP), where a sender with an informational advantage seeks to persuade a stream of myopic receivers to take actions that maximize the sender's cumulative utilities in a finite-horizon Markovian environment with varying prior and utility functions. Planning in MPPs thus faces the unique challenge of finding a signaling policy that is simultaneously persuasive to the myopic receivers and induces the optimal long-term cumulative utilities for the sender. Nevertheless, at the population level, where the model is known, it turns out that we can efficiently determine the optimal (resp. $\epsilon$-optimal) policy with finite (resp. infinite) states and outcomes, through a modified formulation of the Bellman equation. Our main technical contribution is to study the MPP under the online reinforcement learning (RL) setting, where the goal is to learn the optimal signaling policy by interacting with the underlying MPP, without knowledge of the sender's utility functions, prior distributions, or the Markov transition kernels. We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of the optimism and pessimism principles. Our algorithm enjoys sample efficiency, achieving a sublinear $\sqrt{T}$-regret upper bound. Furthermore, both our algorithm and theory can be applied to MPPs with large spaces of outcomes and states via function approximation, and we showcase such success under the linear setting.
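To make the planning formulation concrete, the following is a minimal sketch of how a persuasiveness-constrained Bellman recursion for MPPs might be written. The notation ($u_h$ for the sender's utility, $v_h$ for the receiver's utility, $\mu_h$ for the prior over outcomes $\omega$, $P_h$ for the transition kernel) is our own shorthand, not taken verbatim from the paper.

```latex
% Sketch of a persuasiveness-constrained Bellman recursion for MPP planning.
% u_h: sender utility, v_h: receiver utility, \mu_h: prior over outcomes,
% P_h: transition kernel -- illustrative notation, not the paper's.
\begin{align*}
V_h^*(s) \;=\; \max_{\pi_h(\cdot \mid s,\omega)}\;
  & \mathbb{E}_{\omega \sim \mu_h(\cdot \mid s)}
    \sum_{a} \pi_h(a \mid s,\omega)
    \Big[ u_h(s,\omega,a)
      + \mathbb{E}_{s' \sim P_h(\cdot \mid s,\omega,a)} V_{h+1}^*(s') \Big] \\
\text{s.t.}\;
  & \sum_{\omega} \mu_h(\omega \mid s)\, \pi_h(a \mid s,\omega)
    \big[ v_h(s,\omega,a) - v_h(s,\omega,a') \big] \;\ge\; 0
    \qquad \forall\, a,\, a'.
\end{align*}
```

The constraint is the standard obedience condition: a myopic receiver who is recommended action $a$ has no incentive to deviate to any $a'$. Since both the objective and the constraints are linear in the signaling scheme $\pi_h$, each backward-induction step reduces to a linear program, consistent with the claim that the optimal policy can be computed efficiently for finite states and outcomes.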
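The optimism-pessimism combination in OP4 can be illustrated with a per-state signaling step. The sketch below is hypothetical: it assumes an optimistic sender Q-estimate (point estimate plus exploration bonus) and tightens the receiver's obedience constraints by a pessimistic margin, so that a scheme passing the tightened check stays persuasive despite estimation error. The names (`signaling_lp`, `q_opt`, `v_hat`, `margin`) and the LP encoding are our assumptions, not the paper's code.

```python
# Hypothetical sketch of one per-state signaling step in the spirit of OP4:
# optimism enters through q_opt (estimate + bonus on the sender's value),
# pessimism through a margin subtracted from the obedience constraints.
import numpy as np
from scipy.optimize import linprog

def signaling_lp(mu, q_opt, v_hat, margin):
    """Solve for a joint scheme x[w, a] = mu[w] * pi(a | w) at one state.

    mu     : (W,)    estimated prior over outcomes w
    q_opt  : (W, A)  optimistic sender Q-values (estimate + bonus)
    v_hat  : (W, A)  estimated receiver utilities
    margin : float   pessimistic slack tightening the obedience constraints
    """
    W, A = q_opt.shape
    c = -q_opt.reshape(-1)  # linprog minimizes; we maximize sender value

    # Obedience, tightened pessimistically: for all recommended a and
    # deviations a',  sum_w x[w, a] * (v_hat[w, a] - v_hat[w, a']) >= margin,
    # rewritten in <= form for linprog.
    A_ub, b_ub = [], []
    for a in range(A):
        for a2 in range(A):
            if a == a2:
                continue
            row = np.zeros((W, A))
            row[:, a] = v_hat[:, a2] - v_hat[:, a]
            A_ub.append(row.reshape(-1))
            b_ub.append(-margin)
    A_ub = np.array(A_ub) if A_ub else None
    b_ub = np.array(b_ub) if b_ub else None

    # Consistency: sum_a x[w, a] = mu[w] for every outcome w.
    A_eq = np.zeros((W, W * A))
    for w in range(W):
        A_eq[w, w * A:(w + 1) * A] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=mu,
                  bounds=[(0, None)] * (W * A))
    return res.x.reshape(W, A) if res.success else None
```

If the tightened LP is infeasible (`res.success` is false), a learner might fall back to an uninformative scheme, which is trivially persuasive; as estimates concentrate, the bonus and margin would shrink, which is the intuition behind the sublinear $\sqrt{T}$-regret guarantee.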
