Neural Operators for Bypassing Gain and Control Computations in PDE Backstepping

arXiv (2023)

Abstract
We introduce a framework for eliminating the computation of controller gain functions in PDE control. We learn the nonlinear operator from the plant parameters to the control gains with a (deep) neural network. We provide closed-loop stability guarantees (global exponential) under an NN approximation of the feedback gains. Whereas existing PDE backstepping requires a (one offline) solution of an integral equation to find the gain kernel, the neural operator (NO) approach we propose learns the mapping from the functional coefficients of the plant PDE to the kernel function by employing a sufficiently high number of offline numerical solutions of the kernel integral equation, for a large enough number of different functional coefficients of the PDE model. We prove the existence of a DeepONet approximation, of arbitrarily high accuracy, of the exact nonlinear continuous operator mapping PDE coefficient functions into gain functions. Once proven to exist, learning of the NO is standard, completed "once and for all" (never online), and the kernel integral equation never needs to be solved again, for any new functional coefficient not exceeding the magnitude of the functional coefficients used for training. We also present an extension from approximating the gain kernel operator to approximating the full feedback law mapping, from plant parameter functions and state measurement functions to the control input, with semiglobal practical stability guarantees. Simulation illustrations are provided and code is available on GitHub. This framework, which eliminates real-time recomputation of gains, has the potential to be game-changing for adaptive control of PDEs and gain-scheduling control of nonlinear PDEs. The paper requires no prior background in machine learning or neural networks.
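The core idea above can be sketched as a DeepONet: a branch network encodes the plant's functional coefficient sampled at sensor points, a trunk network encodes a query location, and their inner product approximates the gain kernel value there. The following is a minimal, untrained NumPy sketch of that architecture (the layer sizes, sensor count, and example coefficient are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random (untrained) weights for a small MLP; illustration only.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Forward pass with tanh hidden activations, linear output layer.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# DeepONet: branch net encodes the coefficient function sampled at m
# sensor points; trunk net encodes query locations x; their inner
# product approximates the gain kernel value k(x) at each query.
m, p = 50, 32                       # sensors, latent width (assumed)
branch = mlp_params([m, 64, p])
trunk  = mlp_params([1, 64, p])

def deeponet(coeff_samples, x_query):
    b = mlp(branch, coeff_samples)           # latent code, shape (p,)
    t = mlp(trunk, np.atleast_2d(x_query))   # shape (n_queries, p)
    return t @ b                             # kernel values at queries

xs = np.linspace(0.0, 1.0, m)
lam = 5.0 * np.cos(2 * np.pi * xs)           # a hypothetical coefficient
k_hat = deeponet(lam, np.linspace(0.0, 1.0, 11)[:, None])
```

In the paper's offline phase, such a network would be trained on many (coefficient, kernel) pairs obtained by numerically solving the kernel integral equation; afterwards, gains for a new coefficient come from a single forward pass rather than a new integral-equation solve.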
Keywords
control computations, PDE, bypassing gain