Reinforcement Twinning: from digital twins to model-based reinforcement learning
arXiv (2023)
Abstract
We propose a novel framework for simultaneously training the digital twin of
an engineering system and an associated control agent. The training of the twin
combines methods from data assimilation and system identification, while the
training of the control agent combines model-based optimal control and
model-free reinforcement learning. The combined training of the control agent
is achieved by letting it evolve independently along two paths (one driven by a
model-based optimal control and another driven by reinforcement learning) and
using the virtual environment offered by the digital twin as a playground for
confrontation and indirect interaction. This interaction occurs via an "expert
demonstrator" mechanism: the best-performing policy is selected for interaction
with the real environment and "taught" to the other agent if that agent's
independent training stagnates. We refer to this framework as Reinforcement Twinning (RT). The
framework is tested on three vastly different engineering systems and control
tasks, namely (1) the control of a wind turbine subject to time-varying wind
speed, (2) the trajectory control of flapping-wing micro air vehicles (FWMAVs)
subject to wind gusts, and (3) the mitigation of thermal loads in the
management of cryogenic storage tanks. The test cases are implemented using
simplified models for which the ground truth on the closure law is available.
The results show that the adjoint-based training of the digital twin is
remarkably sample-efficient and completed within a few iterations. Concerning
the control agent training, the results show that the model-based and the
model-free control training benefit from the learning experience and the
complementary learning approach of each other. The encouraging results open the
path towards implementing the RT framework on real systems.
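The dual-path training described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the digital twin is stood in for by a hypothetical 1-D linear system, the two agents are simple proportional gains (`k_mb` for the model-based path, `k_rl` for the model-free path), and `reinforcement_twinning_step` mimics one "confrontation" in the twin, where the better policy is selected and, on stagnation, cloned onto the other.

```python
# Hypothetical stand-in for the digital twin: a 1-D linear system with a
# quadratic cost. The paper's actual twins (wind turbine, FWMAV, cryogenic
# tank) are far richer; this only illustrates the selection mechanism.
def twin_rollout(policy_gain, steps=50, x0=1.0):
    """Roll out a proportional policy u = -k*x in the twin; return total cost."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = -policy_gain * x
        x = 0.9 * x + 0.1 * u          # assumed twin dynamics
        cost += x**2 + 0.01 * u**2     # assumed quadratic tracking cost
    return cost

def reinforcement_twinning_step(k_mb, k_rl, stagnated):
    """One 'confrontation' in the twin: evaluate both policies, pick the
    better one for the real environment, and if independent training has
    stagnated, overwrite the loser with the winner ('expert demonstrator')."""
    j_mb, j_rl = twin_rollout(k_mb), twin_rollout(k_rl)
    best = k_mb if j_mb <= j_rl else k_rl
    if stagnated:
        k_mb = k_rl = best             # loser cloned from the winner
    return best, k_mb, k_rl
```

For example, `reinforcement_twinning_step(0.2, 1.0, True)` selects the stronger gain and synchronizes both paths to it; with `stagnated=False` the two agents keep evolving independently.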