Assuring AI safety: fallible knowledge and the Gricean maxims

AI and Ethics (2024)

Abstract
In this paper we argue that safety claims, when justified by a safety case, are descriptive fallible knowledge claims. Even if the aim of a safety case were to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain for AI-enabled systems: by their nature, AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that knowledge of an AI-enabled system’s safety can be communicated by structuring the exchange according to Paul Grice’s Cooperative Principle, which is achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, the aim being to ensure that the communication of knowledge about an AI-enabled system’s safety is of the highest calibre; in short, that the communication is relevant, of sufficient quantity and quality, and perspicuously expressed. High-calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.
Keywords
Safety assurance, Artificial intelligence, Epistemology, Gricean maxims, Safety communication