Multi-modal active perception for information gathering in science missions

Autonomous Robots (2019)

Abstract
Robotic science missions in remote environments, such as the deep ocean and outer space, can involve studying phenomena that cannot be directly observed using on-board sensors but must instead be deduced by combining measurements of correlated variables with domain knowledge. Traditionally, in such missions, robots passively gather data along prescribed paths, while inference, path planning, and other high-level decision making are largely performed by a supervisory science team, often located at a great distance. However, communication constraints hinder these processes and hence slow the rate of scientific progress. This paper presents an active perception approach that aims to reduce robots’ reliance on human supervision and improve science productivity by encoding scientists’ domain knowledge and decision-making process on-board. We present a Bayesian network architecture that compactly models critical aspects of scientific knowledge while remaining robust to observation and modeling uncertainty. We then formulate path planning and sensor scheduling as an information gain maximization problem, and propose a sampling-based solution built on Monte Carlo tree search to plan informative sensing actions that exploit the knowledge encoded in the network. The computational complexity of our framework does not grow with the number of observations taken, and the framework supports long-horizon planning in an anytime manner, making it highly applicable to field robotics with constrained computing resources. Simulation results show statistically significant performance improvements over baseline methods, and we validate the practicality of our approach through both hardware experiments and simulated experiments with field data gathered during the NASA Mojave Volatiles Prospector science expedition.
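To make the planning idea concrete, below is a minimal sketch (not the authors’ implementation) of how expected information gain over a small discrete Bayesian network can drive the choice of sensing actions via Monte Carlo rollouts. It uses a flat Monte Carlo stand-in for full Monte Carlo tree search, and all variable names, priors, and sensor models (e.g., `spectrometer`, `camera`, `P_H`) are illustrative assumptions rather than values from the paper.

```python
"""Sketch of information-gain-driven sensor selection with Monte Carlo rollouts.

A latent science variable H is never observed directly; two correlated sensor
modalities provide noisy evidence about it.  Candidate sensing actions are
scored by their expected information gain (mutual information) about H, and a
flat Monte Carlo planner picks the first action of a short measurement
sequence under a budget.  All numbers are illustrative assumptions.
"""
import math
import random

# Toy Bayesian network: assumed prior on the hidden phenomenon H.
P_H = {True: 0.3, False: 0.7}
# P(observation | H) for two sensor modalities; purely illustrative numbers.
SENSOR_MODELS = {
    "spectrometer": {True: {True: 0.85, False: 0.15}, False: {True: 0.20, False: 0.80}},
    "camera":       {True: {True: 0.65, False: 0.35}, False: {True: 0.40, False: 0.60}},
}

def entropy(dist):
    """Shannon entropy of a {value: probability} distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, sensor, obs):
    """Bayes update of the belief over H given one observation from `sensor`."""
    unnorm = {h: prior[h] * SENSOR_MODELS[sensor][h][obs] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

def expected_info_gain(prior, sensor):
    """Mutual information I(H; O) = H(prior) - E_obs[ H(posterior) ]."""
    gain = entropy(prior)
    for obs in (True, False):
        p_obs = sum(prior[h] * SENSOR_MODELS[sensor][h][obs] for h in prior)
        if p_obs > 0:
            gain -= p_obs * entropy(posterior(prior, sensor, obs))
    return gain

def rollout(belief, budget, rng):
    """One random rollout: pick sensors at random until the budget is spent,
    accumulating expected information gain (treated as additive for scoring)."""
    total = 0.0
    belief = dict(belief)
    for _ in range(budget):
        sensor = rng.choice(list(SENSOR_MODELS))
        total += expected_info_gain(belief, sensor)
        obs = rng.random() < sum(belief[h] * SENSOR_MODELS[sensor][h][True] for h in belief)
        belief = posterior(belief, sensor, obs)
    return total

def choose_first_action(prior, budget=3, rollouts=200, seed=0):
    """Flat Monte Carlo variant of MCTS: evaluate each first action by the mean
    return of random rollouts that start with it, and commit to the best one."""
    rng = random.Random(seed)
    scores = {}
    for first in SENSOR_MODELS:
        returns = []
        for _ in range(rollouts):
            gain = expected_info_gain(prior, first)
            obs = rng.random() < sum(prior[h] * SENSOR_MODELS[first][h][True] for h in prior)
            belief = posterior(prior, first, obs)
            returns.append(gain + rollout(belief, budget - 1, rng))
        scores[first] = sum(returns) / len(returns)
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    action, scores = choose_first_action(P_H)
    print("rollout scores:", scores)
    print("chosen first sensing action:", action)
```

Scoring each rollout by accumulated expected gain and collapsing past measurements into a fixed-size belief is one simple way to keep per-step planning cost independent of how many observations have already been taken, in the spirit of the anytime, bounded-complexity planning the abstract describes.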
Keywords
Informative sensing planning, Active perception, Robotic exploration, Space robotics