RTA-IR: A runtime assurance framework for behavior planning based on imitation learning and responsibility-sensitive safety model

Expert Systems with Applications (2023)

Abstract
Current research on artificial intelligence (AI) algorithms in safety-critical areas remains extremely challenging because such algorithms cannot be fully verified at design time. In this paper, we propose the RTA-IR architecture, which bypasses formal verification of the AI algorithm by incorporating runtime assurance (RTA), providing safety assurances for the AI controllers of complex autonomous vehicles (such as those obtained using neural networks) without excessive performance sacrifice. RTA-IR consists of a high-performance but unverified advanced controller, two verifiable safety controllers, and a decision module designed based on the Responsibility-Sensitive Safety (RSS) model. The advanced controller is built on attention-based generative adversarial imitation learning (GAIL), which imitates efficient expert policies from a set of expert demonstrations. RSS provides verifiable safety criteria and switching logic for the decision module: RTA-IR keeps the vehicle safe when the advanced controller produces unsafe control, and returns control to the advanced controller once safety is confirmed. We tested and evaluated RTA-IR under two levels of traffic density in a driving task. Experiments show that RTA-IR achieves superior safety and efficiency compared to the baseline method.
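For reference, the verifiable criterion that an RSS-based decision module typically checks is the minimum safe longitudinal distance from the original RSS formulation (Shalev-Shwartz et al.). The abstract does not state the exact criteria RTA-IR uses, so the standard form is shown here only as background:

```latex
d_{\min} \;=\; \max\!\left(0,\;
  v_r \rho \;+\; \tfrac{1}{2}\, a_{\max}\,\rho^{2}
  \;+\; \frac{\left(v_r + \rho\, a_{\max}\right)^{2}}{2\, b_{\min}}
  \;-\; \frac{v_f^{2}}{2\, b_{\max}} \right)
```

Here v_r is the rear (ego) vehicle's speed, v_f the front vehicle's speed, ρ the response time, a_max the maximum acceleration during the response time, b_min the rear vehicle's guaranteed minimum braking deceleration, and b_max the front vehicle's maximum braking deceleration. A gap smaller than d_min signals an unsafe situation and would trigger the switch to a safety controller.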
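A minimal sketch of the RTA switching pattern the abstract describes, using the longitudinal RSS check above as the switching condition. All names, default parameter values, and the 1-D simplification are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass


@dataclass
class VehicleState:
    """Longitudinal state of a vehicle (1-D simplification)."""
    position: float  # m along the lane
    speed: float     # m/s


def rss_safe_distance(v_rear: float, v_front: float,
                      rho: float = 0.5,     # response time in s (assumed value)
                      a_max: float = 3.0,   # max accel during response, m/s^2 (assumed)
                      b_min: float = 4.0,   # rear car's guaranteed braking, m/s^2 (assumed)
                      b_max: float = 8.0) -> float:
    """Standard RSS minimum safe longitudinal distance."""
    v_resp = v_rear + rho * a_max  # rear car's speed at the end of the response time
    d = (v_rear * rho + 0.5 * a_max * rho ** 2
         + v_resp ** 2 / (2.0 * b_min)
         - v_front ** 2 / (2.0 * b_max))
    return max(0.0, d)


def rta_decision(ego: VehicleState, lead: VehicleState,
                 advanced_action, safety_action):
    """Pass through the advanced (learned) action while the RSS criterion
    holds; otherwise hand control to the verified safety controller."""
    gap = lead.position - ego.position
    if gap >= rss_safe_distance(ego.speed, lead.speed):
        return advanced_action  # confirmed safe: advanced controller keeps control
    return safety_action        # RSS violated: safety controller takes over
```

The full RTA-IR decision module would also arbitrate between its two safety controllers, cover lateral cases, and confirm safety over time before restoring the advanced controller, all of which this sketch omits.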
Keywords
Runtime assurance framework, Responsibility-sensitive safety model, Generative adversarial imitation learning, Autonomous driving