Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers' Notions of Responsibility

arXiv (2022)

Abstract
Responsible AI guidelines often ask engineers to consider how their systems might harm. However, contemporary AI systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible AI practice? In interviews with 27 AI engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible AI guidelines as within their agency, capability, or responsibility to address. We use Lucy Suchman's notion of located accountability to show how responsible AI labor is currently organized, and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible AI actions do take place, and which are relegated to low-status staff or believed to be the work of the next or previous person in the chain. We argue that current responsible AI interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could improve by taking a located accountability approach, where relations and obligations intertwine and incrementally add value in the process. This would constitute a shift from "supply chain" thinking to "value chain" thinking.
Keywords
Modularity, software engineering, supply chain, artificial intelligence, ethics, located accountability