POMDP Manipulation Planning under Object Composition Uncertainty

arXiv (2020)

Abstract
Manipulating unknown objects in a cluttered environment is difficult because the segmentation of the scene into objects — that is, the object composition — is uncertain. Due to this uncertainty, earlier work has concentrated either on identifying the "best" object composition and deciding on manipulation actions accordingly, or on greedily gathering information about the "best" object composition. In contrast to earlier work, we 1) utilize different possible object compositions in planning, 2) take advantage of object composition information provided by robot actions, and 3) take into account the effect of different competing object hypotheses on the actual task to be performed. We cast the manipulation planning problem as a partially observable Markov decision process (POMDP) that plans over possible hypotheses of object compositions. The POMDP model chooses the action that maximizes the long-term expected task-specific utility and, in doing so, considers both the value of informative actions and the effect of different object hypotheses on the completion of the task. In simulations and in experiments with an RGB-D sensor, a Kinova Jaco, and a Franka Emika Panda robot arm, the probabilistic approach outperforms an approach that only considers the most likely object composition, and long-term planning outperforms greedy decision making.
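The core idea — maintaining a belief over competing object-composition hypotheses, updating it from action outcomes, and selecting the action with the highest expected task utility — can be illustrated with a minimal one-step sketch. This is a hypothetical toy example, not the paper's implementation: the hypotheses, observation model, and utilities below are invented for illustration, and a full POMDP would plan over multi-step action sequences rather than a single decision.

```python
# Toy illustration (not the paper's method): belief over two competing
# object-composition hypotheses for a pile — "one_object" vs. "two_objects".
belief = {"one_object": 0.6, "two_objects": 0.4}

# Assumed observation model P(observation | hypothesis) for a probing push.
obs_model = {
    "moved_together": {"one_object": 0.9, "two_objects": 0.2},
    "moved_apart":    {"one_object": 0.1, "two_objects": 0.8},
}

def update_belief(belief, observation):
    """Bayes update of the hypothesis belief given an observed outcome."""
    posterior = {h: obs_model[observation][h] * p for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Assumed task utilities U(action, hypothesis) for the final grasp decision.
utility = {
    "grasp_whole": {"one_object": 1.0,  "two_objects": -1.0},
    "grasp_top":   {"one_object": -0.5, "two_objects": 1.0},
}

def best_action(belief):
    """Pick the action maximizing expected utility under the current belief."""
    return max(utility,
               key=lambda a: sum(utility[a][h] * p for h, p in belief.items()))

# A probe reveals the parts moved apart: belief shifts toward "two_objects",
# and the expected-utility choice changes accordingly.
belief = update_belief(belief, "moved_apart")
print(best_action(belief))  # → grasp_top
```

The probing push is an informative action: it carries no immediate reward, but reshaping the belief changes which grasp maximizes expected utility — exactly the trade-off the POMDP formulation evaluates over a longer horizon.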
Keywords
object composition uncertainty, planning