Dynamic Parallel Machine Scheduling With Deep Q-Network

IEEE Transactions on Systems, Man, and Cybernetics: Systems (2023)

Abstract
Parallel machine scheduling (PMS) is a common setting in many manufacturing facilities, in which each job may be processed on any one of several machines of the same type. It involves scheduling $n$ jobs on $m$ machines to minimize certain objective functions. Most such problems are not only NP-hard but also difficult to solve in practice. Moreover, many unexpected events, such as machine failures and requirement changes, are inevitable in real production processes, so static scheduling methods require rescheduling. Deep reinforcement learning (DRL), which combines deep learning and reinforcement learning, has achieved promising results in several domains and has shown the potential to solve large Markov decision process (MDP) optimization tasks. Because PMS problems can be formulated as MDPs, this motivates a DRL method for PMS in a dynamic environment. We develop a novel DRL-based PMS method, called DPMS, whose model exploits the characteristics of PMS to design the states and the reward. The actions are choices among dispatching rules, so DPMS can be viewed as a meta-dispatching-rule system that efficiently selects a sequence of dispatching rules based on the current environment or unexpected events. The experimental results demonstrate that DPMS yields promising results in a dynamic environment by learning from the interactions between the agent and the environment. Furthermore, we conduct extensive experiments to analyze DPMS and provide insight into developing DRL methods for dynamic PMS problems.
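The abstract describes the agent's actions as choices among dispatching rules rather than individual job assignments. A minimal sketch of that meta-dispatching-rule idea is shown below, using tabular Q-learning in place of the paper's deep Q-network; the rule set (SPT, LPT, EDD, FIFO), the bucketed queue-length state, and the negative-tardiness reward are illustrative assumptions, not the paper's actual design:

```python
import random

# Hypothetical dispatching rules: each picks the next job from the queue.
DISPATCH_RULES = {
    "SPT":  lambda jobs: min(jobs, key=lambda j: j["proc"]),  # shortest processing time
    "LPT":  lambda jobs: max(jobs, key=lambda j: j["proc"]),  # longest processing time
    "EDD":  lambda jobs: min(jobs, key=lambda j: j["due"]),   # earliest due date
    "FIFO": lambda jobs: jobs[0],                             # first in, first out
}

def schedule(jobs, m, policy):
    """Dispatch jobs to m identical machines; at each decision point the
    policy maps the queue length to the name of a dispatching rule."""
    machines = [0.0] * m                       # earliest-available time per machine
    queue = [dict(j) for j in jobs]
    total_tardiness = 0.0
    while queue:
        rule = policy(len(queue))              # action = choice of dispatching rule
        job = DISPATCH_RULES[rule](queue)
        queue.remove(job)
        k = min(range(m), key=lambda i: machines[i])  # earliest-free machine
        machines[k] += job["proc"]
        total_tardiness += max(0.0, machines[k] - job["due"])
    return max(machines), total_tardiness      # makespan, total tardiness

def train(jobs, m, episodes=200, eps=0.2, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning stand-in for the DQN: state = bucketed queue
    length, action = dispatching rule, reward = negative tardiness increment."""
    rng, rules, Q = random.Random(seed), list(DISPATCH_RULES), {}
    for _ in range(episodes):
        machines = [0.0] * m
        queue = [dict(j) for j in jobs]
        while queue:
            s = min(len(queue), 5)
            q = Q.setdefault(s, {r: 0.0 for r in rules})
            rule = rng.choice(rules) if rng.random() < eps else max(q, key=q.get)
            job = DISPATCH_RULES[rule](queue)
            queue.remove(job)
            k = min(range(m), key=lambda i: machines[i])
            machines[k] += job["proc"]
            reward = -max(0.0, machines[k] - job["due"])
            nxt = max(Q.get(min(len(queue), 5), {"_": 0.0}).values()) if queue else 0.0
            q[rule] += alpha * (reward + gamma * nxt - q[rule])
    return Q
```

A fixed policy such as `lambda n: "SPT"` reduces this to a classical dispatching rule; the learned table instead switches rules as the queue state changes, which is the "meta-dispatching-rule" behavior the abstract attributes to DPMS.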
Keywords
Deep reinforcement learning (DRL), deep Q-network (DQN), DRL-based PMS (DPMS), Markov decision process (MDP), parallel machine scheduling (PMS)