A risk governance framework for healthcare decision support systems based on socio-technical analysis

Decision support systems (2020)

Abstract
We are developing an Artificial Intelligence (AI) risk governance framework based on human factors and AI governance principles to make automated healthcare decision support safer and more accountable. The healthcare system today faces a severe reporting overload that has made manual processing and comprehensive decision-making impossible. Emerging advances in AI, and especially in Natural Language Processing, appear to be an attractive answer to human limitations in processing high volumes of reports. However, automation carries known risks, including the organisational risks of deploying AI itself, as well as emotional and ethical risks, which are rarely taken into consideration when making AI-based decisions. To explore this, we will first construct a Decision Support System (DSS) tool based on a knowledge graph extracted from real-world healthcare reports. The tool will then be deployed in a controlled manner in a hospital, and its operation will be analysed using an established socio-technical methodology developed by the Centre for Innovative Human Systems at Trinity College Dublin over 25 years of research. We will contribute by integrating computer science with organisational psychology, using human factors methods to identify the impact of AI-based healthcare DSS, their associated risks, and the ethical and legal challenges they raise. We hypothesise that collaborating with organisational psychologists to consider the global system of human decision-making and AI-based DSS will help minimise the risk of AI-based decision-making in a way that ensures fairness, accountability, and transparency. This study will be carried out with our partner hospital, St. James in Dublin.
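The abstract's first step, a knowledge graph extracted from healthcare reports, can be illustrated with a minimal sketch. This is an assumption of how such extraction might work, not the authors' method: it uses toy report text and a naive "head verb tail" pattern over a fixed relation vocabulary, where a real pipeline would use clinical NER and relation extraction.

```python
from collections import defaultdict

# Hypothetical relation vocabulary for the toy extractor (an assumption,
# not taken from the paper).
RELATION_VERBS = ("causes", "treats", "indicates")

def extract_triples(report, verbs=RELATION_VERBS):
    """Return (head, relation, tail) triples for sentences that contain
    one of the known relation verbs."""
    triples = []
    for sentence in report.lower().rstrip(".").split(". "):
        for verb in verbs:
            marker = f" {verb} "
            if marker in sentence:
                head, tail = sentence.split(marker, 1)
                triples.append((head.strip(), verb, tail.strip()))
    return triples

def build_graph(reports):
    """Aggregate triples from many reports into an adjacency map:
    entity -> set of (relation, entity) edges."""
    graph = defaultdict(set)
    for report in reports:
        for head, rel, tail in extract_triples(report):
            graph[head].add((rel, tail))
    return graph

# Toy reports standing in for real-world healthcare reports.
reports = [
    "Sepsis causes hypotension. Fluid therapy treats hypotension.",
    "Elevated lactate indicates sepsis.",
]
graph = build_graph(reports)
```

A DSS built on such a graph could then traverse these edges to surface related findings and treatments; the paper leaves the concrete construction method open.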