Can AI governance be progressive? Group interests, group privacy and abnormal justice

Crossref (2022)

Abstract
The evolution of big data into a policy tool carried with it pre-big-data assumptions about privacy and data reuse. In the 2010s this model broke down irretrievably as big data became a tool for intervening on the public at mass scale. This chapter traces the ensuing institutional responses, in both the public and private sectors, of self-regulation through guidelines and notions of 'responsible data', and shows how they break down when faced with the politics of inclusion and exclusion, and with questions of downstream impacts on sustainability and other structural questions of justice. I argue that Fraser's notion of 'abnormal justice' offers a more useful lens than responsibility in relation to technologies of automation, as it demands understanding of the self-defined interests of affected groups. Integrating these concerns into AI governance requires an approach based on Mouffe's notion of agonistic pluralism, in order to take into account different views of what AI may, and should, do in the world.