Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns
CoRR (2024)
Abstract
Recent advancements in Computer Assisted Diagnosis have shown promising
performance in medical imaging tasks, particularly in chest X-ray analysis.
However, the interaction between these models and radiologists has been
primarily limited to input images. This work proposes a novel approach to
enhance human-computer interaction in chest X-ray analysis using
Vision-Language Models (VLMs) enhanced with radiologists' attention by
incorporating eye gaze data alongside textual prompts. Our approach leverages
heatmaps generated from eye gaze data, overlaying them onto medical images to
highlight the areas on which radiologists focus most intently during chest X-ray evaluation.
We evaluate this methodology in tasks such as visual question answering, chest
X-ray report automation, error detection, and differential diagnosis. Our
results demonstrate that the inclusion of eye gaze information significantly
enhances the accuracy of chest X-ray analysis. Moreover, fine-tuning with eye
gaze data proved beneficial: the resulting model outperformed other medical
VLMs in all tasks except visual question answering. This work demonstrates the
potential of combining the VLM's capabilities with the radiologist's domain
knowledge to improve AI models in medical imaging, paving a novel way for
Computer Assisted Diagnosis with human-centred AI.
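The core idea of converting eye gaze data into an attention heatmap overlaid on the image can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the fixation format (x, y, duration), the Gaussian width `sigma`, and the blending weight `alpha` are all illustrative assumptions.

```python
import numpy as np

def gaze_heatmap(fixations, shape, sigma=20.0):
    """Accumulate gaze fixations into a Gaussian-smoothed heatmap.

    fixations: iterable of (x, y, duration) tuples in pixel coordinates
               (a hypothetical eye-tracker output format).
    shape:     (H, W) of the chest X-ray image.
    sigma:     spread of each fixation's Gaussian footprint, in pixels.
    """
    h, w = shape
    heat = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]
    for x, y, dur in fixations:
        # Longer fixations contribute proportionally more attention mass.
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1]
    return heat

def overlay(image, heat, alpha=0.4):
    """Alpha-blend a normalized heatmap onto a grayscale image in [0, 1]."""
    return (1 - alpha) * image + alpha * heat
```

The blended result can then be fed to the VLM in place of the raw X-ray, letting the model condition on where the radiologist actually looked.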