Comparing the Quality of ChatGPT- and Physician-Generated Responses to Patients' Dermatologic Questions in the Electronic Medical Record.
Clinical and Experimental Dermatology (2024)
Abstract
BACKGROUND: Chat Generative Pre-trained Transformer ('ChatGPT', OpenAI, San Francisco, USA) is a free artificial intelligence (AI)-based natural language processing tool that generates complex responses to inputs from users.
OBJECTIVE: To determine whether ChatGPT can generate high-quality responses to patient-submitted questions in the patient portal.
METHODS: Patient-submitted questions and the corresponding responses from their dermatology physician were extracted from the electronic medical record for analysis. The questions were input into ChatGPT (version 3.5), and the outputs were extracted for analysis, with manual removal of verbiage pertaining to ChatGPT's inability to provide medical advice. Ten blinded reviewers (n=7 physicians, n=3 non-physicians) rated the physician- and ChatGPT-generated responses and selected their preferred response in terms of 'overall quality', 'readability', 'accuracy', 'thoroughness', and 'level of empathy'.
RESULTS: Thirty-one messages and responses were analysed. The physician-generated responses were strongly preferred over the ChatGPT responses by both physician and non-physician reviewers and received significantly higher ratings for 'readability' and 'level of empathy'.
CONCLUSIONS: The results of this study suggest that physician-generated responses to patients' portal messages are still preferred over those of ChatGPT, but generative AI tools may nonetheless be helpful for generating first drafts of responses and educational resources for patients.