A Generic Markov Decision Process Model and Reinforcement Learning Method for Scheduling Agile Earth Observation Satellites

IEEE Transactions on Systems, Man, and Cybernetics: Systems (2022)

Abstract
We investigate a general solution based on reinforcement learning for the agile satellite scheduling problem. The core idea of this method is to learn, from experience, a value function that evaluates the long-term benefit of a given state, and then to apply this value function to guide decisions in previously unseen situations. First, the agile satellite scheduling process is modeled as a finite Markov decision process with a continuous state space and a discrete action space. The two subproblems of the agile Earth observation satellite scheduling problem, i.e., the sequencing problem and the timing problem, are handled by the agent and the environment in the model, respectively. A satisfactory solution to the timing problem can be produced quickly by a constructive heuristic algorithm. The objective is to maximize the total reward over the entire scheduling process. Based on this design, we demonstrate that a Q-network is well suited to approximating the long-term benefit in such problems, and we then train the Q-network by Q-learning. The experimental results show that the trained Q-network copes efficiently with unseen data and generates a high total profit in a short time. The method scales well and can be applied to different types of satellite scheduling problems by customizing only the constraint-checking process and the reward signals.
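The agent/environment split described in the abstract (the agent sequences tasks; the environment solves the timing subproblem with a constructive heuristic and emits the reward signal) can be sketched as follows. This is a minimal illustration only: the toy task set, the state features, the linear stand-in for the Q-network, and all hyper-parameters are assumptions for demonstration, not details taken from the paper.

```python
import random

# Each hypothetical task: (profit, window_start, window_end, duration)
TASKS = [(5, 0, 4, 2), (3, 1, 6, 2), (8, 2, 9, 3), (4, 5, 10, 2)]
N_ACTIONS = len(TASKS)


class SchedulingEnv:
    """Environment side: solves the timing subproblem with a constructive
    heuristic (schedule the chosen task at its earliest feasible start)
    and returns the task's profit as the reward signal."""

    def reset(self):
        self.time = 0.0
        self.remaining = set(range(N_ACTIONS))
        return self._state()

    def _state(self):
        # Continuous state: normalized clock plus per-task availability flags.
        return (self.time / 10.0,) + tuple(
            1.0 if i in self.remaining else 0.0 for i in range(N_ACTIONS))

    def step(self, action):
        assert action in self.remaining
        profit, ws, we, dur = TASKS[action]
        reward = 0.0
        start = max(self.time, ws)
        if start + dur <= we:          # feasible within its time window
            self.time = start + dur
            reward = float(profit)
        self.remaining.discard(action)
        return self._state(), reward, not self.remaining


def q_value(w, state, action):
    # Linear stand-in for the Q-network: one weight vector per action.
    return sum(wi * si for wi, si in zip(w[action], state))


def train(episodes=1000, alpha=0.02, gamma=0.95, eps=0.2, seed=0):
    """Agent side: Q-learning with epsilon-greedy sequencing decisions."""
    rng = random.Random(seed)
    env = SchedulingEnv()
    w = [[0.0] * (1 + N_ACTIONS) for _ in range(N_ACTIONS)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = sorted(env.remaining)
            a = (rng.choice(acts) if rng.random() < eps
                 else max(acts, key=lambda b: q_value(w, s, b)))
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * max(
                q_value(w, s2, b) for b in env.remaining))
            td = target - q_value(w, s, a)
            w[a] = [wi + alpha * td * si for wi, si in zip(w[a], s)]
            s = s2
    return w


def evaluate(w):
    """Greedy rollout with the learned value function on a fresh episode."""
    env, total = SchedulingEnv(), 0.0
    s, done = env.reset(), False
    while not done:
        a = max(sorted(env.remaining), key=lambda b: q_value(w, s, b))
        s, r, done = env.step(a)
        total += r
    return total
```

The paper's point about scalability maps onto this sketch directly: swapping in a different satellite scheduling variant would mean changing only the feasibility check and reward computation inside `step`, while the agent-side training loop stays untouched.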
Keywords
Agile Earth observation satellite (AEOS),Markov decision process (MDP),Q-learning,reinforcement learning (RL),task scheduling