Socio-cognitive biases in folk AI ethics and risk discourse

AI and Ethics (2021)

Abstract
The ongoing conversation on AI ethics and politics is in full swing and has spread to the general public. Rather than contributing by engaging with the issues and views discussed, we want to step back and comment on the widening conversation itself. We consider evolved human cognitive tendencies and biases, and how they frame and hinder the conversation on AI ethics. Primarily, we describe our innate human capacities known as folk theories and how we apply them to phenomena of different implicit categories. Through examples and empirical findings, we show that such tendencies specifically affect the key issues discussed in AI ethics. The central claim is that much of our mostly opaque intuitive thinking has not evolved to match the nature of AI, and this causes problems in democratizing AI ethics and politics. Developing awareness of how our intuitive thinking affects our more explicit views will add to the quality of the conversation.
Keywords
Artificial intelligence, Moral psychology, Moral philosophy, Machine ethics, Cognitive bias, Folk theories, Categorization