Bandit Approach to Conflict-Free Parallel Q-Learning in View of Photonic Implementation

Intelligent Computing (2023)

Abstract
Extensive studies have recently been conducted on photonic reinforcement learning, which exploits the physical nature of light to accelerate computation. Previous studies utilized quantum interference of photons to achieve collective decision-making without choice conflicts when solving the competitive multi-armed bandit problem, a fundamental example in reinforcement learning. However, the bandit problem deals with a static environment in which the agent’s actions do not influence the reward probabilities. This study aims to extend the conventional approach to a more general type of parallel reinforcement learning targeting the grid world problem. Unlike the conventional approach, the proposed scheme deals with a dynamic environment in which the reward changes as a result of the agent’s actions. A successful photonic reinforcement learning scheme requires both a photonic system that contributes to the quality of learning and a suitable algorithm. This study proposes a novel learning algorithm, a modified bandit Q-learning method, in view of a potential photonic implementation. Here, state–action pairs in the environment are regarded as slot machines in the context of the bandit problem, and the change in Q-value is regarded as the bandit reward. We perform numerical simulations to validate the effectiveness of the bandit algorithm. In addition, we propose a parallel architecture in which multiple agents are indirectly connected through quantum interference of light, and quantum principles ensure the conflict-free property of state–action pair selections among agents. We demonstrate that parallel reinforcement learning can be accelerated owing to conflict avoidance among multiple agents.
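To make the "bandit Q-learning" idea concrete, the following is a minimal Python sketch of the general technique described in the abstract, not the authors' exact algorithm: each state–action pair of a small grid world is treated as a slot-machine arm, and the magnitude of the Q-value update is fed back as the bandit reward that biases which pair is selected next. The grid size, learning rates, temperature, and reward structure are illustrative assumptions, and the photonic/multi-agent conflict-free selection layer is omitted.

```python
# Sketch only: bandit-style selection over state-action "arms" in a grid world,
# using the Q-value change |ΔQ| as the bandit reward. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 4                                        # N x N grid, goal at bottom-right
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)] # up, down, left, right
alpha, gamma = 0.1, 0.9                      # Q-learning step size and discount

Q = np.zeros((N, N, len(ACTIONS)))           # Q-values per state-action pair
arm_value = np.zeros_like(Q)                 # bandit estimate of |ΔQ| per "arm"

def step(state, a):
    """Apply action a; reward 1 only when the goal corner is reached."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    done = (nr, nc) == (N - 1, N - 1)
    return (nr, nc), (1.0 if done else 0.0), done

for episode in range(200):
    state, done = (0, 0), False
    while not done:
        r, c = state
        # Bandit-style selection: softmax over estimated |ΔQ|, so pairs whose
        # Q-values are still changing are explored preferentially.
        prefs = arm_value[r, c] - arm_value[r, c].max()
        probs = np.exp(prefs / 0.1)
        probs /= probs.sum()
        a = rng.choice(len(ACTIONS), p=probs)

        next_state, reward, done = step(state, a)
        nr, nc = next_state
        td_target = reward + (0.0 if done else gamma * Q[nr, nc].max())
        delta_q = alpha * (td_target - Q[r, c, a])
        Q[r, c, a] += delta_q
        # The Q-value change plays the role of the bandit reward for this arm.
        arm_value[r, c, a] += 0.1 * (abs(delta_q) - arm_value[r, c, a])
        state = next_state
```

In a parallel, conflict-free variant as described in the abstract, the softmax draw would be replaced by a joint selection mechanism (realized photonically via quantum interference) that guarantees no two agents pick the same state–action pair in the same round.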
Keywords
photonic implementation, conflict-free, Q-learning