Multimodal Medical Image Retrieval: OHSU at ImageCLEF 2008

CLEF'08: Proceedings of the 9th Cross-Language Evaluation Forum Conference on Evaluating Systems for Multilingual and Multimodal Information Access (2009)

Abstract
We present results from the Oregon Health & Science University's participation in the medical retrieval task of ImageCLEF 2008. Our web-based retrieval system was built using the Ruby on Rails framework. Ferret, a Ruby port of Lucene, was used to create the full-text index and search engine. In addition to the textual index of annotations, supervised machine learning techniques using visual features were used to classify the images by image acquisition modality. Our system provides the user with a number of search options, including the ability to limit a search by modality, UMLS-based query expansion, and Natural Language Processing-based techniques. Purely textual runs as well as mixed runs using the purported modality were submitted. We also submitted interactive runs using user-specified search options. Although the use of the UMLS Metathesaurus increased our recall, our system is geared towards early precision. Consequently, many of our multimodal automatic runs using the custom parser, as well as our interactive runs, had high early precision, including the highest P10 and P30 among the official runs. Our runs also performed well on the bpref metric, a measure that is more robust in the case of incomplete relevance judgments.
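As a rough illustration of the annotation-indexing approach described in the abstract, the following is a minimal Ruby sketch using the Ferret gem to index image annotations and run a modality-restricted keyword query. The field names and sample records are hypothetical and are not taken from the paper or its collection.

    require 'ferret'

    # Build an in-memory full-text index of image annotations.
    # Field names (:image_id, :modality, :caption) are illustrative only.
    index = Ferret::Index::Index.new

    annotations = [
      { :image_id => 'img001', :modality => 'x-ray',
        :caption  => 'Chest radiograph showing left lower lobe pneumonia' },
      { :image_id => 'img002', :modality => 'ct',
        :caption  => 'Axial CT of the abdomen with contrast' }
    ]

    annotations.each { |doc| index << doc }

    # Lucene-style query syntax: a modality limit is expressed as a field query.
    index.search_each('caption:pneumonia AND modality:"x-ray"') do |doc_id, score|
      puts "#{index[doc_id][:image_id]}  score=#{score}"
    end

In a deployment, the modality field would be populated by the visual-feature classifier rather than supplied by hand, and the query string would come from the (optionally UMLS-expanded) user query.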
Keywords
image retrieval, ImageCLEF