Paper Title


Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate

Authors

Yufeng Zhang, Qi Cai, Zhuoran Yang, Zhaoran Wang

Abstract


Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks. Different from reinforcement learning, GAIL learns both policy and reward function from expert (human) demonstration. Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution. The major difficulty comes from the nonconvex-nonconcave minimax optimization structure. To bridge the gap between practice and theory, we analyze a gradient-based algorithm with alternating updates and establish its sublinear convergence to the globally optimal solution. To the best of our knowledge, our analysis establishes the global optimality and convergence rate of GAIL with neural networks for the first time.
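The "gradient-based algorithm with alternating updates" mentioned in the abstract can be illustrated with a generic alternating gradient descent-ascent loop on a toy minimax problem. This is only a structural sketch under illustrative assumptions (a bilinear objective f(x, y) = x * y and a fixed step size), not the paper's actual algorithm, which alternates updates over neural-network policy and reward parameters:

```python
# Sketch of alternating gradient descent-ascent (GDA) for min_x max_y f(x, y).
# Illustrative only: the objective f(x, y) = x * y and step size eta are
# assumptions, not taken from the paper.

def grad_x(x, y):
    # df/dx for the illustrative bilinear objective f(x, y) = x * y
    return y

def grad_y(x, y):
    # df/dy for f(x, y) = x * y
    return x

def alternating_gda(x0, y0, eta=0.1, steps=100):
    x, y = x0, y0
    for _ in range(steps):
        x = x - eta * grad_x(x, y)  # descent step for the minimizing player
        y = y + eta * grad_y(x, y)  # ascent step, using the freshly updated x
    return x, y

x, y = alternating_gda(1.0, 1.0)
print(x, y)
```

The key structural point is that the two players are updated in sequence rather than simultaneously: the ascent step sees the already-updated x, which is what "alternating updates" refers to.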
