Democratizing scientific and healthcare communication with large language models

Cancer Research, Statistics, and Treatment (2023)

Abstract
In their study, Parikh et al. aimed to identify differential attitudes toward ChatGPT among healthcare professionals and those who worked in other fields.[1] Overall, compared to their non-healthcare counterparts, a smaller proportion of healthcare professionals had used ChatGPT, and a greater proportion felt ChatGPT would "revolutionize their world" by 50% or less. On the other hand, healthcare professionals were less fearful of the potentially catastrophic consequences of generative artificial intelligence (AI). These preliminary data suggest that the healthcare field is not fully aware of, is cautiously optimistic about, and underestimates the potential of this transformative technology.[2] The limitations of this study, including non-representative sampling and somewhat ambiguous questions, are important to consider and are well highlighted in an editorial by Pearce and Roop.[3] Still, it is commendable that the authors quickly generated a first impression of healthcare perceptions surrounding ChatGPT.

Whether or not healthcare professionals know it, they were using AI long before the release of ChatGPT, to autocomplete emails, unlock smartphones, and even receive movie recommendations.[4] Most likely, they will need to reckon with generative AI in some capacity, including uses that may be less apparent.

One area where generative AI can have a near-immediate impact is in the creation and communication of scientific content. This is illustrated by the original paper's use of ChatGPT to perform the statistical analysis. We know that GPT-series models can generate code to run scientific analyses from narrative prompts; a brief sketch of what such model-generated analysis code might look like follows this abstract. Large language models, including ChatGPT, have the potential to analyze data, assist with writing, and review scientific literature, if correctly prompted. The ability of language models to produce human-appearing writing is now well known.[5,6] Through these attributes, generative AI could help level the academic playing field by allowing individual investigators who lack the resources of an established research group to more quickly generate and disseminate findings to a broad audience. While generative AI will add complexity to the regulation and ethics of scientific discovery, it also presents an opportunity to make this work more democratic.

Despite the rapid advancements in generative AI (e.g., since the submission of the original article, ChatGPT Plus with access to GPT-4 has become available), there are concerns about how these technologies are being used. Several publishing groups have established policies limiting the use of generative AI in manuscripts, ranging from limited use with attribution to outright bans. Model "hallucination," which occurs when a model has inadequate data or training to generate an accurate response, is currently a barrier to the use of large language models in healthcare,[7] but it may improve significantly once ChatGPT can cross-reference its responses against the internet.[3] While caution is warranted, overly conservative policies may slow the pace of adoption in fields such as healthcare and research.

The authors' findings suggest that healthcare workers may be more receptive than others to seeing AI as a force for good. Our field would benefit from a robust dialogue around responsible regulation of generative AI without stifling the necessary innovations that benefit our patients and profession.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
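As a concrete illustration of the point above, here is a minimal, hypothetical sketch of the kind of analysis code a GPT-series model might return when prompted in plain language to compare ChatGPT usage between two groups of respondents. The counts, the two-group framing, and the choice of a chi-square test are illustrative assumptions for this sketch, not the data or methods of Parikh et al.

```python
# Minimal sketch: the kind of code a language model might generate from a
# narrative prompt such as "Compare the proportion of healthcare vs.
# non-healthcare respondents who have used ChatGPT."
# All counts below are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

# Rows: healthcare professionals, non-healthcare respondents
# Columns: have used ChatGPT, have not used ChatGPT
contingency = [
    [40, 160],   # healthcare (illustrative counts only)
    [90, 110],   # non-healthcare (illustrative counts only)
]

# Chi-square test of independence between group and ChatGPT usage
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```

Code like this still requires human verification of the statistical approach and the outputs; the value of the model lies in lowering the barrier to producing a first draft of the analysis.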
Keywords
healthcare communication, large language models, language models