An Investigation of Evaluation Methods in Automatic Medical Note Generation

ACL (Findings), 2023

Abstract
Recent studies on automatic note generation have shown that doctors can save significant amounts of time when using automatic clinical note generation (Knoll et al., 2022). Summarization models have been used for this task to generate clinical notes as summaries of doctor-patient conversations (Krishna et al., 2021; Cai et al., 2022). However, assessing which model would best serve clinicians in their daily practice is still a challenging task due to the large set of possible correct summaries, and the potential limitations of automatic evaluation metrics. In this paper, we study evaluation methods and metrics for the automatic generation of clinical notes from medical conversations. In particular, we propose new task-specific metrics and we compare them to SOTA evaluation metrics in text summarization and generation, including: (i) knowledge-graph embedding-based metrics, (ii) customized model-based metrics, (iii) domain-adapted/fine-tuned metrics, and (iv) ensemble metrics. To study the correlation between the automatic metrics and manual judgments, we evaluate automatic notes/summaries by comparing the system and reference facts and computing the factual correctness, and the hallucination and omission rates for critical medical facts. This study relied on seven datasets manually annotated by domain experts. Our experiments show that automatic evaluation metrics can have substantially different behaviors on different types of clinical notes datasets. However, the results highlight one stable subset of metrics as the most correlated with human judgments with a relevant aggregation of different evaluation criteria.
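As a rough illustration of the fact-based evaluation the abstract describes, here is a minimal Python sketch. It assumes facts have already been extracted from the system note and the reference as string sets (the extraction step itself is out of scope); all function names and the toy numbers are hypothetical, not the paper's code. It computes factual correctness, hallucination, and omission rates, then correlates an automatic metric with human judgments via Spearman's rank correlation.

from scipy.stats import spearmanr


def fact_based_scores(system_facts: set[str], reference_facts: set[str]) -> dict[str, float]:
    # Compare the two fact sets; these definitions are illustrative assumptions.
    matched = system_facts & reference_facts
    return {
        # fraction of system facts supported by the reference
        "factual_correctness": len(matched) / len(system_facts) if system_facts else 0.0,
        # system facts absent from the reference (hallucinations)
        "hallucination_rate": len(system_facts - reference_facts) / len(system_facts) if system_facts else 0.0,
        # reference facts missing from the system note (omissions)
        "omission_rate": len(reference_facts - system_facts) / len(reference_facts) if reference_facts else 0.0,
    }


# Correlating an automatic metric with manual judgments across examples,
# as the study does on its annotated datasets (toy numbers only).
metric_scores = [0.71, 0.55, 0.83, 0.40]
human_scores = [0.75, 0.50, 0.80, 0.45]
rho, p_value = spearmanr(metric_scores, human_scores)
print(f"Spearman rho={rho:.3f} (p={p_value:.3f})")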
Keywords
Clinical Decision Support Systems