Generalising Kendall's Tau for Noisy and Incomplete Preference Judgements

Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR '19), 2019

Abstract
We propose a new ranking evaluation measure for situations where multiple preference judgements are given for each item pair but they may be noisy (i.e., some judgements are unreliable) and/or incomplete (i.e., some judgements are missing). While it is generally easier for assessors to conduct preference judgements than absolute judgements, it is often not practical to obtain preference judgements for all combinations of documents. However, this problem can be overcome if we can effectively utilise noisy and incomplete preference judgements such as those that can be obtained from crowdsourcing. Our measure, eta, is based on a simple probabilistic user model of the labellers which assumes that each document is associated with a graded relevance score for a given query. We also consider situations where multiple preference probabilities, rather than preference labels, are given for each document pair. For example, in the absence of manual preference judgements, one might want to employ an ensemble of machine learning techniques to obtain such estimated probabilities. For this scenario, we propose another ranking evaluation measure called eta(p). Through simulated experiments, we demonstrate that our proposed measures eta and eta(p) can evaluate rankings more reliably than tau-b, a popular rank correlation measure.
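The abstract compares the proposed measures against tau-b, the tie-adjusted variant of Kendall's rank correlation. As a point of reference for that baseline (not the proposed eta or eta(p), whose definitions are not given here), the following is a minimal pure-Python sketch of tau-b; the function name `kendall_tau_b` and the example score lists are illustrative only.

```python
from itertools import combinations
from collections import Counter
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b between two equal-length score sequences.

    tau_b = (C - D) / sqrt((n0 - n1) * (n0 - n2)), where C/D count
    concordant/discordant pairs, n0 = n(n-1)/2, and n1, n2 correct
    for tied pairs within x and y respectively.
    """
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
        # s == 0 means a tie in x and/or y; ties enter via n1, n2 below
    n = len(x)
    n0 = n * (n - 1) // 2
    n1 = sum(t * (t - 1) // 2 for t in Counter(x).values())
    n2 = sum(t * (t - 1) // 2 for t in Counter(y).values())
    return (concordant - discordant) / sqrt((n0 - n1) * (n0 - n2))

# Illustrative relevance scores for four documents under two rankings:
# 4 concordant pairs, 2 discordant, no ties -> tau_b = (4 - 2) / 6 = 1/3
print(kendall_tau_b([1, 2, 3, 4], [2, 1, 4, 3]))
```

Note that tau-b treats every pairwise disagreement identically; the paper's motivation is that with noisy or incomplete preference judgements, such a uniform treatment can make the evaluation less reliable than a model-based measure.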
Keywords
crowdsourcing, evaluation measures, graded relevance, preference judgements