Distributionally robust chance-constrained Markov decision processes

arXiv (2022)

Abstract
A Markov decision process (MDP) is a decision-making framework in which a decision maker seeks to maximize the expected discounted value of a stream of rewards received at future stages, at states visited according to a controlled Markov chain. Many algorithms, including linear programming methods, are available in the literature to compute an optimal policy when the rewards and transition probabilities are deterministic. In this paper, we consider an MDP problem where the transition probabilities are known and the reward vector is a random vector whose distribution is only partially known. We formulate the MDP problem in a distributionally robust chance-constrained optimization framework under various moments-based uncertainty sets, as well as statistical-distance-based uncertainty sets defined using phi-divergences and the Wasserstein distance metric. For each type of uncertainty set, we consider the case where the random reward vector has either full support or nonnegative support. In the full-support case, we show that the distributionally robust chance-constrained Markov decision process is equivalent to a second-order cone programming problem for the moments-based and phi-divergence-based uncertainty sets, and to a mixed-integer second-order cone programming problem for the Wasserstein-distance-based uncertainty set. In the nonnegative-support case, it is equivalent to a copositive optimization problem for the moments-based uncertainty sets and to a biconvex optimization problem for the Wasserstein-distance-based uncertainty set. As an application, we study a machine replacement problem and present numerical experiments on randomly generated instances.
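To make the abstract's formulation concrete, here is a plausible sketch of the base problem over the standard discounted occupation-measure polytope of an MDP; the notation is ours, assumed from the standard literature rather than taken verbatim from the paper:

```latex
% x: discounted occupation measure; D: ambiguity set of reward distributions;
% eps: risk level; mu: initial state distribution; gamma: discount factor.
\max_{x \in \mathcal{X},\; y \in \mathbb{R}} \; y
\quad \text{s.t.} \quad
\inf_{F \in \mathcal{D}} \mathbb{P}_{r \sim F}\!\bigl( r^{\top} x \ge y \bigr) \ge 1 - \epsilon,
\qquad
\mathcal{X} = \Bigl\{ x \ge 0 : \sum_{a} x(s,a)
  - \gamma \sum_{s',a} p(s \mid s',a)\, x(s',a) = \mu(s) \;\; \forall s \Bigr\}.
```

For a moments-based ambiguity set containing all distributions with given mean and covariance, the worst-case chance constraint admits the classical second-order cone reformulation (in the style of Calafiore and El Ghaoui), which is one way the SOCP equivalence stated in the abstract can arise. Below is a minimal numerical sketch of that reformulation in cvxpy on a randomly generated instance; all names, sizes, and parameter values are illustrative assumptions, not the authors' code or data:

```python
# Sketch: moments-based distributionally robust chance-constrained MDP
# solved as an SOCP over the occupation-measure polytope (assumed setup).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
nS, nA = 4, 2            # illustrative state/action counts
gamma, eps = 0.9, 0.05   # discount factor and risk level

# Random transition kernel P[s, a, s'] (each row sums to 1) and initial distribution.
P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)
mu0 = np.full(nS, 1.0 / nS)

# Partially known reward distribution: only mean and a covariance factor.
mean_r = rng.random(nS * nA)
Sig_half = rng.random((nS * nA, nS * nA)) / 10.0   # Sigma = Sig_half.T @ Sig_half

x = cp.Variable(nS * nA, nonneg=True)   # discounted occupation measure
y = cp.Variable()                       # worst-case (1 - eps)-guaranteed value

def idx(s, a):
    """Flatten a (state, action) pair into an index of x."""
    return s * nA + a

# Occupation-measure (dual LP) constraints:
#   sum_a x(s,a) - gamma * sum_{s',a} P(s | s', a) x(s', a) = mu0(s).
flow = []
for s in range(nS):
    outflow = sum(x[idx(s, a)] for a in range(nA))
    inflow = sum(P[sp, a, s] * x[idx(sp, a)]
                 for sp in range(nS) for a in range(nA))
    flow.append(outflow - gamma * inflow == mu0[s])

# Moments-based worst-case chance constraint as a second-order cone:
#   mean_r' x - kappa * ||Sigma^{1/2} x||_2 >= y,  kappa = sqrt((1 - eps) / eps).
kappa = np.sqrt((1 - eps) / eps)
soc = [mean_r @ x - kappa * cp.norm(Sig_half @ x, 2) >= y]

prob = cp.Problem(cp.Maximize(y), flow + soc)
prob.solve()
print("worst-case guaranteed discounted reward:", y.value)
```

The same scaffold plausibly extends to the other cases the abstract mentions: for phi-divergence sets, a known result of Jiang and Guan (2016) reduces the worst-case chance constraint to a nominal one with an adjusted risk level, while the Wasserstein case typically introduces binary variables, consistent with the mixed-integer SOCP equivalence claimed above.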
Keywords
chance-constrained