Double DQN-based Power System Transient Stability Emergency Control with Protection Coordinations

2023 IEEE 6th International Electrical and Energy Conference (CIEEC)

Abstract
With the development of the modern power grid and the pursuit of higher efficiency and profit, power systems are operated near their stability and security boundaries more often than before. Under this background, better emergency control methods need to be deployed to protect the system. In this work, a deep reinforcement learning based power system transient stability emergency control method is proposed. The proposed method adopts the double deep Q-network (DQN) structure to prevent overestimation of potential rewards, making the emergency control strategy more reliable. To further improve performance, protection system action information is used and coordinated with the emergency control actions. In addition, to address the curse of dimensionality in large systems, a curriculum learning framework is used to train the deep reinforcement learning algorithm. The proposed transient stability method is tested on a two-area four-machine system and the benchmark IEEE 16-machine 68-bus system.
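The core mechanism the abstract refers to is the double DQN target, in which the online network selects the next action and a separate target network evaluates it, reducing the upward bias of the plain max-based DQN target. The following is a minimal sketch of that idea in PyTorch; the network architecture, state/action encoding, and names such as `QNet` and `double_dqn_loss` are illustrative assumptions, not the paper's implementation.

```python
# Minimal double DQN target sketch; names and dimensions are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small fully connected Q-network mapping a grid state vector
    (e.g. bus voltages, rotor angles) to Q-values over discrete
    emergency-control actions such as load-shedding steps."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def double_dqn_loss(online_net: QNet, target_net: QNet,
                    states, actions, rewards, next_states, dones,
                    gamma: float = 0.99) -> torch.Tensor:
    """Double DQN loss: the online network picks the greedy next action,
    the target network evaluates it, curbing Q-value overestimation."""
    q_taken = online_net(states).gather(1, actions)                   # Q(s, a)
    with torch.no_grad():
        next_a = online_net(next_states).argmax(1, keepdim=True)      # action chosen by online net
        next_q = target_net(next_states).gather(1, next_a)            # value from target net
        targets = rewards + gamma * (1.0 - dones) * next_q
    return nn.functional.mse_loss(q_taken, targets)
```

In a typical training loop, `target_net` is periodically synchronized with `online_net` (hard copy or Polyak averaging); under a curriculum learning setup as described in the abstract, the same loss would simply be applied to episodes drawn from progressively harder contingency scenarios.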
Keywords
transient stability, emergency control strategy, deep reinforcement learning, protection coordination, curriculum learning