Machine Self-Confidence in Autonomous Systems via Meta-Analysis of Decision Processes.

ADVANCES IN ARTIFICIAL INTELLIGENCE, SOFTWARE AND SYSTEMS ENGINEERING (2020)

Cited by 4 | Viewed 6
Abstract
Algorithmic assurances assist human users in trusting advanced autonomous systems appropriately. This work explores one approach to creating assurances in which systems self-assess their decision-making capabilities, resulting in a 'self-confidence' measure. We present a framework for self-confidence assessment and reporting using meta-analysis factors, and then develop a new factor pertaining to 'solver quality' in the context of solving Markov decision processes (MDPs), which are widely used in autonomous systems. A novel method for computing solver quality self-confidence is derived, drawing inspiration from empirical hardness models. Numerical examples show our approach has desirable properties for enabling an MDP-based agent to self-assess its performance for a given task under different conditions. Experimental results for a simulated autonomous vehicle navigation problem show significantly improved delegated task performance outcomes in conditions where self-confidence reports are provided to users.
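The paper's actual solver-quality factor is not reproduced in this abstract, but the idea of an MDP-based agent self-assessing its solution can be loosely illustrated with standard value iteration, using the Bellman residual as a crude convergence-based confidence signal. The toy 2-state MDP below is a hypothetical example, not the paper's benchmark, and the confidence score is an assumption for illustration only:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative only).
# P[s, a, s'] = transition probability, R[s, a] = immediate reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, iters):
    """Run value iteration; return values and final Bellman residual."""
    V = np.zeros(P.shape[0])
    residual = np.inf
    for _ in range(iters):
        Q = R + gamma * P @ V        # Q[s, a] = expected discounted return
        V_new = Q.max(axis=1)        # greedy backup
        residual = np.abs(V_new - V).max()
        V = V_new
    return V, residual

V, residual = value_iteration(P, R, gamma, iters=200)

# A crude self-assessment proxy (an assumption, not the paper's measure):
# report high confidence only when the backup has effectively converged.
confidence = 1.0 if residual < 1e-6 else 1.0 - min(1.0, residual)
print(V, residual, confidence)
```

Because value iteration is a gamma-contraction, the residual shrinks geometrically, so a residual-based score at least distinguishes a converged solver from one cut off early; the paper's empirical-hardness-model approach is considerably richer than this sketch.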
Keywords
Human-machine systems, Artificial intelligence, Self-assessment