Paper Title
Regret Pruning for Learning Equilibria in Simulation-Based Games
Paper Authors
Paper Abstract
In recent years, empirical game-theoretic analysis (EGTA) has emerged as a powerful tool for analyzing games in which an exact specification of the utilities is unavailable. Instead, EGTA assumes access to an oracle, i.e., a simulator, which can generate unbiased noisy samples of players' unknown utilities, given a strategy profile. Utilities can thus be empirically estimated by repeatedly querying the simulator. Recently, various progressive sampling (PS) algorithms have been proposed, which aim to produce PAC-style learning guarantees (e.g., approximate Nash equilibria with high probability) using as few simulator queries as possible. One recent work introduced a pruning technique called regret pruning, which further reduces the number of simulator queries made by PS algorithms that aim to learn pure Nash equilibria. In this paper, we address a serious limitation of this original regret pruning approach: it can only guarantee that true pure Nash equilibria of the empirical game are approximate equilibria of the true game, and it provides no strong guarantees regarding the efficacy of approximate pure Nash equilibria. This is a significant limitation since, in many games, pure Nash equilibria are computationally intractable to find, or even non-existent. We introduce three novel regret pruning variations. The first two variations generalize the original regret pruning approach to yield guarantees for approximate pure Nash equilibria of the empirical game. The third variation goes further, yielding strong guarantees for all approximate mixed Nash equilibria of the empirical game. We use these regret pruning variations to design two novel progressive sampling algorithms, PS-REG+ and PS-REG-M, which experimentally outperform the previous state-of-the-art algorithms for learning pure and mixed equilibria, respectively, of simulation-based games.
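To make the setting concrete, below is a minimal sketch (not taken from the paper) of the EGTA setup the abstract describes: utilities are estimated by averaging repeated noisy simulator queries, and a pure profile's regret (the largest gain any player obtains from a unilateral deviation) is the quantity an ε-Nash guarantee bounds. The simulator interface, game size, noise level, and uniform sample count are all hypothetical; a real PS algorithm with regret pruning would allocate queries adaptively rather than uniformly.

```python
import numpy as np

# Illustrative sketch only: a hypothetical 2-player game with hidden true
# utilities, a noisy simulator oracle, and empirical utility estimation.
rng = np.random.default_rng(0)
num_players, num_strategies = 2, 3
true_utils = rng.uniform(0, 1, size=(num_strategies, num_strategies, num_players))

def simulate(profile):
    """Oracle: one unbiased, noisy sample of all players' utilities at `profile`."""
    return true_utils[profile] + rng.normal(0, 0.1, size=num_players)

def estimate_empirical_game(num_samples=200):
    """Estimate every profile's utilities by averaging repeated simulator queries."""
    emp = np.zeros_like(true_utils)
    for s0 in range(num_strategies):
        for s1 in range(num_strategies):
            samples = [simulate((s0, s1)) for _ in range(num_samples)]
            emp[s0, s1] = np.mean(samples, axis=0)
    return emp

def pure_regret(utils, profile):
    """Max gain any single player gets by unilaterally deviating from `profile`.
    A profile is an epsilon-pure-Nash equilibrium iff its regret is at most epsilon."""
    regret = 0.0
    for p in range(num_players):
        best_deviation = max(
            utils[profile[:p] + (dev,) + profile[p + 1:]][p]
            for dev in range(num_strategies)
        )
        regret = max(regret, best_deviation - utils[profile][p])
    return regret

emp_game = estimate_empirical_game()
profile = (0, 1)
print("empirical regret of profile:", pure_regret(emp_game, profile))
print("true regret of profile:     ", pure_regret(true_utils, profile))
```

The gap between the empirical and true regret of a profile is what PAC-style guarantees control: if utility estimates are accurate to within ε/2 with high probability, then any exact (or, with the paper's extensions, approximate) equilibrium of the empirical game is an ε-approximate equilibrium of the true game.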