Planning with Intermittent State Observability: Knowing When to Act Blind

2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Abstract
Contemporary planning models and methods often rely on constant availability of free state information at each step of execution. However, autonomous systems are increasingly deployed in the open world where state information may be costly or simply unavailable in certain situations. Failing to account for sensor limitations may lead to costly behavior or even catastrophic failure. While the partially observable Markov decision process (POMDP) can be used to model this problem, solving POMDPs is often intractable. We introduce a planning model called a semi-observable Markov decision process (SOMDP) specifically designed for MDPs where state observability may be intermittent. We propose an approach for solving SOMDPs that uses memory states to proactively plan for the potential loss of sensor information while exploiting the unique structure of SOMDPs. Our theoretical analysis and empirical evaluation demonstrate the advantages of SOMDPs relative to existing planning models.
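The central idea of the abstract, planning over memory states that anticipate losing the sensor, can be illustrated with a small sketch. Everything below is an illustrative assumption rather than the paper's actual SOMDP formulation or solver: the corridor MDP, the observable set, and the observation model (the state is seen exactly when it lies in an observable region, and not at all otherwise) are all made up for this example.

```python
"""Minimal sketch of planning with intermittent state observability.

Assumptions (illustrative, not the paper's SOMDP formulation): the agent
either observes its state exactly (when the state lies in an 'observable'
set) or receives no observation at all, and the memory state is the belief
propagated forward from the last full observation.
"""
from functools import lru_cache

STATES = range(5)            # corridor positions 0..4 (hypothetical MDP)
ACTIONS = ("left", "right")
GOAL = 4
GAMMA = 0.95
OBSERVABLE = {0, 4}          # sensor works only at the endpoints (assumption)

def transition(s, a):
    """Return a list of (probability, next_state) pairs."""
    step = -1 if a == "left" else 1
    nxt = min(max(s + step, 0), len(STATES) - 1)
    return [(0.8, nxt), (0.2, s)]          # 20% chance the move fails

def reward(s, a):
    return 1.0 if s == GOAL else 0.0

def propagate(belief, a):
    """Push a belief (tuple of (state, prob) pairs) through the model."""
    out = {}
    for s, p in belief:
        for tp, s2 in transition(s, a):
            out[s2] = out.get(s2, 0.0) + p * tp
    return out

def successors(pred):
    """Split a predicted belief into successor memory states.

    If the true next state is observable, the belief collapses to it;
    otherwise the agent keeps the renormalized belief over hidden states.
    """
    succ = [(p, ((s, 1.0),)) for s, p in pred.items()
            if s in OBSERVABLE and p > 0]
    hidden = {s: p for s, p in pred.items() if s not in OBSERVABLE and p > 0}
    p_hidden = sum(hidden.values())
    if p_hidden > 0:
        blind = tuple(sorted((s, p / p_hidden) for s, p in hidden.items()))
        succ.append((p_hidden, blind))
    return succ

@lru_cache(maxsize=None)
def value(belief, horizon):
    """Finite-horizon value of a memory state (belief as a sorted tuple)."""
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        immediate = sum(p * reward(s, a) for s, p in belief)
        pred = propagate(belief, a)
        future = sum(p * value(b, horizon - 1) for p, b in successors(pred))
        best = max(best, immediate + GAMMA * future)
    return best

if __name__ == "__main__":
    start = ((0, 1.0),)      # fully observed start at position 0
    print("value at the observable start state:", round(value(start, 15), 3))
```

In this sketch the memory state is simply the belief carried forward from the last full observation, which is what lets the planner decide in advance how long it is worth "acting blind" before steering back toward a region where the sensor works again.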
Keywords
autonomous systems, catastrophic failure, constant availability, contemporary planning models, costly behavior, free state information, intermittent state observability, memory states, open world, partially observable Markov decision process, planning model, POMDP, semi-observable Markov decision process, sensor information, sensor limitations, SOMDP