
Defining the scope of AI ADM system risk assessment

Edward Elgar Publishing eBooks (2022)

Abstract
Guidance documents for technology governance and data protection often use broad terms such as Artificial Intelligence (AI). This is problematic; the term 'AI' is inherently ambiguous, and it is difficult to tease out the nuances in the 'grey areas' between AI techniques and/or automated decision-making (ADM) processes. We use four illustrative examples to demonstrate that the categorisation gives only partial information about each system's risk profile. We argue that organisations should adopt risk-oriented approaches to identify system risks that extend beyond technology classification as AI or non-AI. Organisational governance processes should entail a more holistic assessment of system risk: rather than relying on 'top-down' categorisations of the technologies employed, they should apply a 'bottom-up' risk identification process that enables a more effective identification of appropriate controls and mitigation strategies.
Keywords
risk assessment