A Soft Graph Attention Reinforcement Learning for Multi-Agent Cooperation
Huimu Wang1,2; Zhiqiang Pu1,2; Zhen Liu2; Jianqiang Yi1,2; Tenghai Qiu2
2020-08
Conference date: 2020-08
Conference venue: Online
Abstract (English)

Multi-agent reinforcement learning (MARL) suffers from several issues when applied to large-scale environments. Specifically, communication among the agents is limited by communication distance or bandwidth. Besides, the interactions among the agents in large-scale environments are complex, which makes it hard for each agent to account for the differing influences of the other agents and to learn a stable policy. To address these issues, a soft graph attention reinforcement learning (SGA-RL) method is proposed. By taking advantage of the chain-propagation characteristics of graph neural networks, stacked graph convolution layers can overcome the communication limitations and enlarge the agents' receptive fields, promoting cooperative behavior among the agents. Moreover, unlike the traditional multi-head attention mechanism, which treats all heads equally, a soft attention mechanism is designed to learn the importance of each attention head, so that each agent can weigh the other agents' influence more effectively in large-scale environments. Simulation results indicate that with SGA-RL the agents can learn stable and sophisticated cooperative strategies in large-scale environments.
 

Language: English
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/44950]
Collection: Integrated Information System Research Center, Intelligent Technology for Aircraft
Corresponding author: Zhiqiang Pu
Author affiliations: 1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Huimu Wang, Zhiqiang Pu, Zhen Liu, et al. A Soft Graph Attention Reinforcement Learning for Multi-Agent Cooperation[C]. Online. 2020-08.
 
