Evaluating the Human Safety Net: Observational Study of Physician Responses to Unsafe AI Recommendations in High-Fidelity Simulation

medRxiv (2023)

Abstract
In the context of Artificial Intelligence (AI)-driven decision support systems for high-stakes environments, particularly in healthcare, ensuring the safety of human-AI interactions is paramount, given the potential risks associated with erroneous AI outputs. To address this, we conducted a prospective observational study involving 38 intensivists in a simulated medical setting. Physicians wore eye-tracking glasses and received AI-generated treatment recommendations, including unsafe ones. Most clinicians promptly rejected unsafe AI recommendations, with many seeking senior assistance. Intriguingly, physicians paid increased attention to unsafe AI recommendations, as indicated by eye-tracking data. However, they did not rely on traditional clinical sources for validation post-AI interaction, suggesting limited "debugging." Our study emphasises the importance of human oversight in critical domains and highlights the value of eye-tracking in evaluating human-AI dynamics. Additionally, we observed human-human interactions, where an experimenter played the role of a bedside nurse, influencing a few physicians to accept unsafe AI recommendations. This underscores the complexity of trying to predict behavioural dynamics between humans and AI in high-stakes settings.

### Competing Interest Statement

The authors have declared no competing interest.

### Funding Statement

This work was funded by the University of York and the Lloyds Register Foundation through the Assuring Autonomy International Programme (Project Reference 03/19/07) and supported by the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre (BRC). PF and MN were supported by a PhD studentship of the UKRI Centre for Doctoral Training in AI for Healthcare (EP/S023283/1). ACG was supported by an NIHR Research Professorship (RP-2015-06-018). AAF was supported by a UKRI Turing AI Fellowship (EP/V025449/1). This study/project/report is independent research funded by the NIHR (Artificial Intelligence, 'Validation of a machine learning tool for optimal sepsis treatment', AI_AWARD01869).

### Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained. Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below: The ethics committee/IRB of Imperial College London gave ethical approval for this work.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals. Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance). Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable. Yes

All data and code for this paper are available at: https://figshare.com/s/78c5ff5c6031f701c0d1