Concurrent reinforcement learning as a rehearsal for decentralized planning under uncertainty

AAMAS 2013

Abstract
Dec-POMDPs are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Recently, reinforcement learning (RL) based approaches have been proposed for distributed solution of Dec-POMDPs without full prior knowledge of the model. These methods assume that agents have only local information available to them during the learning process, i.e., that conditions during learning and policy execution are identical. However, in practical scenarios this may not be the case, and agents may have difficulty learning under such unnecessary constraints. We propose a novel RL approach in which agents are allowed to "rehearse" with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on this information. We show experimentally that incorporating such information can ameliorate the difficulties faced by non-rehearsal-based learners, and demonstrate fast, (near-)optimal performance on many existing benchmark Dec-POMDP problems. We also propose a new benchmark domain that is less abstract than existing domains and is designed to be particularly challenging to RL-based solvers, as a target for current and future research on RL solutions to Dec-POMDPs.
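The rehearsal idea can be illustrated with a minimal sketch: during learning, each agent may consult privileged information (here, the true hidden state) to guide exploration, while its Q-table and final policy are indexed only by its local observation, so the policy remains executable without that information. The toy problem, function names, and parameters below are illustrative assumptions, not the paper's algorithm or benchmark domains.

```python
"""Minimal sketch of rehearsal-style decentralized Q-learning (illustrative only).

Hypothetical toy problem: a two-agent, two-state world in which each agent
receives a noisy observation of the hidden state and both must match the
state to earn a joint reward.
"""
import random
from collections import defaultdict

STATES = [0, 1]
ACTIONS = [0, 1]
OBS_NOISE = 0.15   # probability a local observation flips
EPISODES = 20000
ALPHA = 0.1

def observe(state):
    """Noisy local observation of the hidden state."""
    return state if random.random() > OBS_NOISE else 1 - state

def joint_reward(state, a0, a1):
    """Both agents must match the hidden state to score."""
    return 1.0 if a0 == state and a1 == state else 0.0

# Per-agent Q-tables indexed ONLY by the local observation, never by the state.
q = [defaultdict(float), defaultdict(float)]

for _ in range(EPISODES):
    state = random.choice(STATES)
    obs = [observe(state), observe(state)]
    actions = []
    for i in range(2):
        if random.random() < 0.2:
            # Rehearsal: privileged knowledge of `state` biases exploration
            # toward informative actions, but it is never an index into the
            # learned policy.
            actions.append(state if random.random() < 0.7 else random.choice(ACTIONS))
        else:
            actions.append(max(ACTIONS, key=lambda a: q[i][(obs[i], a)]))
    r = joint_reward(state, *actions)
    for i in range(2):
        key = (obs[i], actions[i])
        q[i][key] += ALPHA * (r - q[i][key])   # one-step update on the joint reward

# Execution-time policy: greedy in the local Q-table, no privileged input needed.
policy = [{o: max(ACTIONS, key=lambda a: q[i][(o, a)]) for o in STATES} for i in range(2)]
print(policy)
```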
Keywords
policy execution, full knowledge, decentralized planning, reinforcement learning, RL solution, full prior knowledge, new benchmark domain, prevalent Dec-POMDP solution technique, novel RL approach, existing benchmark Dec-POMDP problem, concurrent reinforcement learning, local information