Could artificial intelligence write mental health nursing care plans?

Journal of Psychiatric and Mental Health Nursing (2023)

Abstract
WHAT IS KNOWN ON THE SUBJECT?: Artificial intelligence (AI) is freely available, responds to very basic text input (such as a question) and can now create a wide range of outputs, communicating in many languages or art forms. AI platforms like OpenAI's ChatGPT can now create passages of text that could be used to create plans of care for people with mental health needs. As such, AI output can be difficult to distinguish from human output, and there is a risk that its use could go unnoticed.

WHAT THIS PAPER ADDS TO EXISTING KNOWLEDGE?: Whilst it is known that AI can produce text or pass pre-registration health-profession exams, it is not known whether AI can produce meaningful results for care delivery. We asked ChatGPT basic questions about a fictitious person who presents with self-harm and then evaluated the quality of the output. We found that the output could look reasonable to laypersons, but there were significant errors and ethical issues. There are potential harms to people in care if AI is used without an expert correcting or removing these errors.

WHAT ARE THE IMPLICATIONS FOR PRACTICE?: We suggest that there is a risk that AI use could cause harm if it were used in direct care delivery. There is a lack of policy and research to safeguard people receiving care, and this needs to be in place before AI is used in this way. Key aspects of the role of a mental health nurse are relational, and AI use in its current form may diminish mental health nurses' ability to provide safe care. Many aspects of mental health recovery are linked to relationships and social engagement; however, AI is not able to provide this and may push the people who are in most need of help further away from services that assist recovery.

ABSTRACT: Background: Artificial intelligence (AI) is being increasingly used and discussed in care contexts. ChatGPT has gained significant attention in popular and scientific literature, although how ChatGPT can be used in care delivery is not yet known. Aims: To use artificial intelligence (ChatGPT) to create a mental health nursing care plan and evaluate the quality of the output against the authors' clinical experience and existing guidance. Materials & Methods: Basic text commands were input into ChatGPT about a fictitious person called 'Emily' who presents with self-injurious behaviour. The output from ChatGPT was then evaluated against the authors' clinical experience and current (national) care guidance. Results: ChatGPT was able to provide a care plan that incorporated some principles of dialectical behaviour therapy, but the output had significant errors and limitations, and there is thus a reasonable likelihood of harm if it is used in this way. Discussion: AI use is increasing in direct-care contexts through the use of chatbots or other means. However, AI can inhibit clinician-to-care-recipient engagement, 'recycle' existing stigma, and introduce error, which may diminish the ability of care to uphold personhood and therefore lead to significant avoidable harms. Conclusion: Use of AI in this context should be avoided until policy and guidance can safeguard the wellbeing of care recipients and the sophistication of AI output has increased. Given ChatGPT's ability to provide superficially reasonable outputs, there is a risk that errors may go unnoticed and thus increase the likelihood of patient harms. Further research evaluating AI output is needed to consider how AI may be used safely in care delivery.
Keywords
art of nursing, nursing role, quality of care, self-harm, therapeutic relationships