An Improved Minimax-Q Algorithm Based on Generalized Policy Iteration to Solve a Chaser-Invader Game
Liu MS(刘民颂)1,2; Zhu YH(朱圆恒)1,2; Zhao DB(赵冬斌)1,2
2020-07
Conference Date | 2020-5
Conference Venue | Online
Abstract | In this paper, we use reinforcement learning and zero-sum games to solve a Chaser-Invader game, which is in fact a Markov game (MG). Unlike the single-agent Markov decision process (MDP), an MG captures the interaction of multiple agents, extending game theory to an MDP-style environment. This paper proposes an improved algorithm based on the classical Minimax-Q algorithm. First, to overcome the restriction that Minimax-Q applies only to discrete, simple environments, we replace traditional Q-learning with a Deep Q-network. Second, we propose a generalized policy iteration scheme to solve the zero-sum game, in which the agent computes a Nash equilibrium action at each step via linear programming. Finally, comparative experiments show that the improved algorithm performs as well as Monte Carlo Tree Search in simple environments and better than Monte Carlo Tree Search in complex environments.
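The per-state equilibrium step the abstract describes can be sketched as follows: in Minimax-Q, the Q-values at a state form a zero-sum payoff matrix, and the maximizing player's mixed strategy and game value come from a linear program. This is a minimal illustration using SciPy's `linprog`, not the paper's implementation; the matching-pennies payoff matrix is a placeholder example.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Return (value, mixed strategy) for the row (maximizing) player
    of the zero-sum matrix game with payoff matrix A."""
    A = np.asarray(A, dtype=float)
    n_rows, n_cols = A.shape
    # Decision variables: p_0..p_{n_rows-1}, v.  Minimize -v (maximize v).
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # For each opponent column j:  v - sum_i p_i * A[i, j] <= 0
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    # The strategy probabilities must sum to 1.
    A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_rows + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

# Matching pennies: the unique equilibrium mixes uniformly, with value 0.
value, policy = solve_zero_sum([[1, -1], [-1, 1]])
print(value, policy)
```

In Minimax-Q the payoff matrix would instead be `Q(s, a, o)` for the current state `s`, recomputed each time the Q-network is updated.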
Content Type | Conference Paper
Source URL | http://ir.ia.ac.cn/handle/173211/58505
Collection | State Key Laboratory of Management and Control for Complex Systems: Deep Reinforcement Learning
Corresponding Author | Zhao DB(赵冬斌)
Author Affiliations | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences 2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Liu MS, Zhu YH, Zhao DB. An Improved Minimax-Q Algorithm Based on Generalized Policy Iteration to Solve a Chaser-Invader Game[C]. Online, 2020-5.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.