Learning to rank from a noisy crowd.
SIGIR '11: The 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, Beijing, China, July 2011.
Abstract
We study how to best use crowdsourced relevance judgments for learning to rank [1, 7]. We integrate two lines of prior work: unreliable crowd-based binary annotation for binary classification [5, 3], and aggregation of graded relevance judgments from reliable experts for ranking [7]. To model the varying performance of the crowd, we simulate annotation noise of varying magnitude and distributional properties. Evaluation on three LETOR test collections reveals a striking trend contrary to prior studies: single labeling outperforms consensus methods in maximizing learner accuracy relative to annotator effort. We also observe surprising consistency of the learning curve across noise distributions, as well as greater difficulty with the adversarial case for multi-class labeling.
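The comparison the abstract describes can be illustrated with a small simulation. Below is a minimal sketch (not the authors' code) of the two labeling strategies under a symmetric noise model: each crowd annotation flips the true binary label with probability `flip_prob`, and consensus aggregates repeated annotations by majority vote. The function names and the fixed random seed are illustrative assumptions, not from the paper.

```python
import random

def noisy_label(true_label, flip_prob, rng):
    """Simulate one crowd annotation: flip the true 0/1 label with probability flip_prob."""
    return 1 - true_label if rng.random() < flip_prob else true_label

def majority_vote(labels):
    """Aggregate repeated annotations of one item by simple majority (ties resolve to 0)."""
    return 1 if sum(labels) * 2 > len(labels) else 0

def label_accuracy(true_labels, flip_prob, votes_per_item, seed=0):
    """Fraction of items whose aggregated label matches the ground truth."""
    rng = random.Random(seed)
    correct = 0
    for y in true_labels:
        votes = [noisy_label(y, flip_prob, rng) for _ in range(votes_per_item)]
        correct += (majority_vote(votes) == y)
    return correct / len(true_labels)
```

Under a fixed budget of B annotations, single labeling covers B distinct items once each (`votes_per_item=1`), while consensus covers B/k items with k votes each; the paper's finding is that the larger, noisier training set from single labeling yields better learner accuracy per unit of annotator effort.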