Exclusivity and paternalism in the public governance of explainable AI

Computer Law & Security Review (2021)

Abstract
In this comment, we address the apparent exclusivity and paternalism of goal and standard setting for explainable AI and its implications for the public governance of AI. We argue that the widening use of AI decision-making, including the development of autonomous systems, not only poses widely discussed risks for human autonomy in itself, but is also the subject of a standard-setting process that is remarkably closed to effective public contestation. The implications of this turn in governance for democratic decision-making in Britain have also yet to be fully appreciated. As the governance of AI gathers pace, one of the major tasks will be to ensure not only that AI systems are technically ‘explainable’ but that, in a fuller sense, the relevant standards and rules are contestable and that governing institutions and processes are open to democratic contestability.
Keywords
Artificial intelligence, Explainability, Trust, Governance