Computational Framework For Verifiable Decisions Of Self-Driving Vehicles

2018 IEEE Conference on Control Technology and Applications (CCTA)

Abstract
A framework is presented for the verification of an agent's decision making in autonomous driving applications by checking the logic of the agent for instability and inconsistency. The framework verifies the decisions of a rational agent implemented in Natural Language Programming (NLP) and based on the belief-desire-intention (BDI) paradigm, using sEnglish and Jason code. The main results are methods for verifying the correctness of real-time agent decisions expressed as computational tree logic (CTL) formulae. The methods rely on the Model Checker for Multi-Agent Systems (MCMAS) verification tool. To test the new verification system, an autonomous vehicle (AV) has been modelled and simulated that is capable of planning, navigation, object detection, and obstacle avoidance using a rational agent. The agent's decisions are based on information received from monocular cameras and a LiDAR sensor, which feeds into the logic-based decisions of the AV. The model of the AV and its environment has been implemented in the Robot Operating System (ROS) and the Gazebo virtual reality simulator.
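As a purely illustrative sketch (the concrete properties and atomic propositions are assumptions, not taken from the paper), decision-correctness requirements of this kind are typically written as CTL formulae over the agent's beliefs and actions, for example:

% Illustrative CTL properties; obstacle_detected, avoid_manoeuvre and goal_reached
% are hypothetical atomic propositions, not identifiers from the paper.
\[
  \mathbf{AG}\bigl(\mathit{obstacle\_detected} \rightarrow \mathbf{AF}\,\mathit{avoid\_manoeuvre}\bigr)
\]
\[
  \mathbf{AG}\,\mathbf{EF}\,\mathit{goal\_reached}
\]

The first formula states that, on every execution, a detected obstacle is always eventually followed by an avoidance manoeuvre; the second that from every reachable state some path to the goal still exists. MCMAS checks formulae of this form against an interpreted-systems model of the agent and its environment.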
Keywords
computational framework, self-driving vehicles, autonomous driving applications, rational agent, belief-desire-intention paradigm, real-time agent decisions, computational tree logic formulae, verification system, autonomous vehicle, object detection, obstacle avoidance, logic-based decisions, model checker, robot operating system, multi-agent systems verification tool, sEnglish code, Jason code, natural language programming, Gazebo virtual reality simulator