RMITB at TREC COVID 2020

arXiv (2020)

Abstract
Search engine users rarely express an information need using the same query, and small differences in queries can lead to very different result sets. These user query variations have been exploited in past TREC CORE tracks to contribute diverse, highly effective runs to offline evaluation campaigns, with the goal of producing reusable test collections. In this paper, we document the query fusion runs submitted to the first and second rounds of TREC COVID, using ten queries per topic created by the first author. In our analysis, we focus primarily on the effects of having our second-priority run omitted from the judgment pool. This run is of particular interest, as it surfaced a number of relevant documents that were not judged until later rounds of the task. Had the additional judgments been included in the first round, this run would have risen by 35 rank positions when scored with RBP (p = 0.5), highlighting the importance of judgment depth and coverage in assessment tasks.
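The 35-position swing is measured with rank-biased precision (RBP), whose treatment of unjudged documents makes the abstract's point concrete: documents missing from the judgment pool contribute to an uncertainty residual rather than to the score, so shallow pools can understate a run's effectiveness. Below is a minimal Python sketch of RBP with residuals (not the authors' code; the example run and judgment pool are invented for illustration):

```python
def rbp(ranking, judgments, p=0.5):
    """Rank-biased precision (Moffat & Zobel, 2008).

    Returns (score, residual): `score` counts only documents judged
    relevant; `residual` is the most the score could still gain if every
    unjudged document turned out to be relevant -- which is why runs
    omitted from the judgment pool can look worse than they are.
    """
    score, residual = 0.0, 0.0
    for i, doc in enumerate(ranking):      # i = 0 corresponds to rank 1
        weight = (1 - p) * p ** i          # geometric rank discount
        if doc not in judgments:           # unjudged: pure uncertainty
            residual += weight
        elif judgments[doc]:               # judged relevant
            score += weight
    # everything beyond the submitted ranking is also unjudged
    residual += p ** len(ranking)
    return score, residual

# Hypothetical run and partial judgment pool for one topic.
run = ["d3", "d7", "d1", "d9", "d2"]
qrels = {"d3": True, "d1": False, "d2": True}   # d7, d9 unjudged
print(rbp(run, qrels, p=0.5))   # (0.53125, 0.34375)
```

With p = 0.5 the top ranks dominate the score, so even a handful of later judgments on previously unjudged documents can move a run substantially, consistent with the shift reported above.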
Keywords
TREC COVID