Explainable Agreement through Simulation for Tasks with Subjective Labels.

arXiv: Information Retrieval (2018)

Cited 23 | Views 22
Abstract
The field of information retrieval often works with limited and noisy data in an attempt to classify documents into subjective categories, e.g., relevance, sentiment, and controversy. We typically quantify a notion of agreement to understand the difficulty of the labeling task, but when we present final results, we do so using measures that are unaware of agreement or the inherent subjectivity of the task. We propose using user simulation to understand the effect size of this noisy agreement data. By simulating truth and predictions, we can understand the maximum scores a dataset can support: if a classifier does better than a reasonable model of a human annotator, we cannot conclude that it is actually better; it may simply be learning noise present in the dataset. We present a brief case study on controversy detection which concludes that a commonly-used dataset has been exhausted: to advance the state of the art, more data must be gathered at the current level of label agreement in order to distinguish between techniques with confidence.
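The simulation idea can be made concrete with a short sketch. The following is a minimal illustration, not the paper's actual procedure: it assumes accuracy as the evaluation measure and a simple model in which a simulated annotator reproduces each true label with the observed agreement probability. The function name and all parameters (n_docs, pos_rate, agreement, n_trials) are hypothetical.

import random
import statistics

def simulate_agreement_ceiling(n_docs=1000, pos_rate=0.3, agreement=0.85,
                               n_trials=1000, seed=0):
    """Estimate the score ceiling a dataset can support given label agreement.

    Draw a 'true' labeling, then a human-like predictor that matches each
    true label with probability `agreement` (and flips it otherwise). The
    resulting score distribution bounds what a classifier can meaningfully
    achieve on labels of this quality.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_trials):
        truth = [rng.random() < pos_rate for _ in range(n_docs)]
        preds = [t if rng.random() < agreement else not t for t in truth]
        correct = sum(t == p for t, p in zip(truth, preds))
        scores.append(correct / n_docs)
    return statistics.mean(scores), statistics.stdev(scores)

mean, sd = simulate_agreement_ceiling()
print(f"simulated human ceiling: accuracy = {mean:.3f} +/- {sd:.3f}")
# A classifier scoring above roughly mean + 2*sd on such a dataset may be
# fitting label noise rather than genuinely outperforming a human annotator.

Under this toy model, the ceiling tracks the agreement rate directly, which is the abstract's point: once reported scores approach it, the dataset can no longer distinguish between techniques with confidence.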