Interpretable Run-Time Monitoring and Replanning for Safe Autonomous Systems Operations

IEEE ROBOTICS AND AUTOMATION LETTERS (2020)

Cited: 6 | Views: 8
Abstract
Autonomous robots, especially aerial vehicles, when subject to disturbances, uncertainties, and noise, may deviate from their desired states and planned trajectories, which can lead them into an unsafe state (e.g., a collision). It is thus necessary to monitor their states at run-time when operating in uncertain and cluttered environments and to intervene to guarantee their safety and that of their surroundings. While Reachability Analysis (RA) has been successfully used to provide safety guarantees, it does not explain why a system is predicted to be unsafe or what corrective actions would change that decision. In this work we propose a novel approach for run-time monitoring that leverages a library of previously observed trajectories together with decision tree theory to predict whether the system will be safe or unsafe and to provide an explanation of the causes of the prediction. We design an interpretable monitor that checks at run-time whether the vehicle may become unsafe and plans safe corrective actions if it is found unsafe. For each prediction, we provide a logical explanation - a decision rule - that includes information about the causes that led to the predicted safety decision. The explanation also includes a set of counterfactual rules that show which system variables, if changed, may bring the system to the opposite safety decision. We leverage such an explanation to plan corrective actions that always keep the vehicle safe. Our technique is validated both in simulations and in experiments on a quadrotor UAV in cluttered environments under the effect of disturbances not seen during training.
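To make the monitoring idea concrete, the sketch below shows one possible realization, not the authors' implementation: a decision-tree classifier trained on a hypothetical library of labelled trajectory features that, at run-time, predicts safe/unsafe and returns the conditions along the root-to-leaf path as the decision-rule explanation. The feature names, the training data, and the explain_prediction helper are illustrative assumptions; the sketch relies on scikit-learn's DecisionTreeClassifier.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["obstacle_distance_m", "speed_mps", "tracking_error_m"]  # hypothetical state features

# Library of previously observed trajectory states, labelled safe (1) or unsafe (0).
X_library = np.array([[2.0, 1.0, 0.1],
                      [0.4, 2.5, 0.6],
                      [1.5, 0.8, 0.2],
                      [0.3, 3.0, 0.7]])
y_library = np.array([1, 0, 1, 0])

monitor = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_library, y_library)

def explain_prediction(x):
    """Predict safe/unsafe for state x and return the decision rule (tree path) as text."""
    x = x.reshape(1, -1)
    path = monitor.decision_path(x).indices   # node ids on the root-to-leaf path
    leaf = monitor.apply(x)[0]
    conditions = []
    for node in path:
        if node == leaf:                      # leaf node carries no split condition
            continue
        feat = monitor.tree_.feature[node]
        thr = monitor.tree_.threshold[node]
        op = "<=" if x[0, feat] <= thr else ">"
        conditions.append(f"{FEATURES[feat]} {op} {thr:.2f}")
    label = "SAFE" if monitor.predict(x)[0] == 1 else "UNSAFE"
    return label, " AND ".join(conditions)

# Example run-time check for a hypothetical current vehicle state.
label, rule = explain_prediction(np.array([0.5, 2.2, 0.5]))
print(f"{label} because {rule}")

A counterfactual rule of the kind described in the abstract could then be obtained by searching the tree for the nearest leaf with the opposite label and reporting which path conditions would have to change, which is the information the replanner would use to select a safe corrective action.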
Keywords
Motion and path planning, aerial systems: applications, collision avoidance