Risk filtering and risk-averse control of Markovian systems subject to model uncertainty

arXiv (2023)

Abstract
We consider a Markov decision process subject to model uncertainty in a Bayesian framework, where the state process is observed but its law is unknown to the observer. In addition, while the state process and the controls are observed at time t, the actual cost, which may depend on the unknown parameter, is not known at time t. The controller optimizes the total cost using a family of special risk measures, called risk filters, that are defined so as to account for the model uncertainty of the controlled system. These key features lead to non-standard and non-trivial risk-averse control problems, for which we derive the Bellman principle of optimality. We illustrate the general theory with two practical examples: clinical trials and optimal investment.
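The setting described above can be illustrated with a minimal sketch. This is not the paper's construction: the models, the Bayesian update, and the choice of the entropic risk measure as a stand-in for the paper's risk filters are all illustrative assumptions. The sketch shows the two key features of the abstract: the stage cost depends on an unknown parameter theta, and the controller evaluates a risk measure of that cost under the current posterior.

```python
import math

# Illustrative 2-state, 2-action MDP with two candidate models theta in {0, 1}.
# P[theta][a][s] is the (assumed) distribution over the next state s'.
P = {
    0: [[[0.9, 0.1], [0.2, 0.8]],
        [[0.5, 0.5], [0.5, 0.5]]],
    1: [[[0.1, 0.9], [0.8, 0.2]],
        [[0.5, 0.5], [0.5, 0.5]]],
}
# Stage cost c(theta, s, a): not known at time t because theta is unknown.
C = {0: [[0.0, 1.0], [1.0, 0.0]],
     1: [[1.0, 0.0], [0.0, 1.0]]}

def entropic_risk(values, weights, gamma=1.0):
    """rho(X) = (1/gamma) log E[exp(gamma X)] -- a standard convex risk
    measure, used here only as a stand-in for the paper's risk filters."""
    return math.log(sum(w * math.exp(gamma * v)
                        for v, w in zip(values, weights))) / gamma

def bayes_update(belief, s, a, s_next):
    """Posterior over theta after observing one transition (s, a, s')."""
    post = {th: b * P[th][a][s][s_next] for th, b in belief.items()}
    z = sum(post.values())
    return {th: w / z for th, w in post.items()}

def risk_averse_action(belief, s, gamma=1.0):
    """Greedy one-step choice: minimize the risk of the unknown stage cost,
    with the risk evaluated under the current posterior over theta."""
    thetas = sorted(belief)
    def risk(a):
        return entropic_risk([C[th][s][a] for th in thetas],
                             [belief[th] for th in thetas], gamma)
    return min(range(2), key=risk)

belief = {0: 0.5, 1: 0.5}
# Observing s=0 -> s'=0 under action 0 favors model theta=0.
belief = bayes_update(belief, 0, 0, 0)
print(belief[0])                    # posterior weight on theta=0
print(risk_averse_action(belief, 0))
```

The full problem treated in the paper is dynamic (the risk filters compose over time and the Bellman principle is derived for the total cost), whereas this sketch only shows the one-step interaction between the posterior and the risk evaluation.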
Keywords
Markov decision processes, Model uncertainty, Dynamic measures of risk, Dynamic programming