Fixing Mislabeling by Human Annotators Leveraging Conflict Resolution and Prior Knowledge

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2019)

Abstract
According to the "human in the loop" paradigm, machine learning algorithms can improve by leveraging human intelligence, usually in the form of labels or annotations from domain experts. However, in research areas such as ubiquitous computing or lifelong learning, where the annotator is not an expert and is continuously asked for feedback, humans can provide significant fractions of incorrect labels. We address this issue in a series of experiments where students are asked to provide information about their behavior via a dedicated mobile application. Their trustworthiness is tested with an architecture in which the machine uses all of its available knowledge to check the correctness of both its own and the user's labeling, building a uniform confidence measure for the two that is applied whenever a contradiction arises. The overarching system runs through a series of modes with progressively higher confidence and features a conflict resolution component to settle inconsistencies. The results are very promising: they show the pervasiveness of annotation mistakes, the extreme diversity of users' behaviors, which makes a uniform one-size-fits-all solution impractical, and the substantially improved performance of a skeptical supervised learning strategy.
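
The abstract describes a skeptical learning loop: the machine compares its own prediction confidence against an estimate of the annotator's reliability and, when the two labels contradict each other, keeps the label backed by the higher confidence. The Python sketch below illustrates that idea only; it is not the paper's actual method. The scikit-learn classifier, the resolve_label helper, and the fixed annotator_trust score are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def resolve_label(model, x, user_label, annotator_trust):
    # Machine confidence: predicted class probability of the fitted model.
    proba = model.predict_proba(x.reshape(1, -1))[0]
    machine_label = int(np.argmax(proba))
    machine_conf = float(proba[machine_label])
    if machine_label == user_label:
        return user_label  # no contradiction: accept the annotation
    # Contradiction: keep the label of the more confident source.
    return machine_label if machine_conf > annotator_trust else user_label

# Toy usage on synthetic data with a simulated mislabel.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x_new = rng.normal(size=4)
noisy_label = 1 - int(x_new[0] + x_new[1] > 0)  # annotator flips the true label
print(resolve_label(model, x_new, noisy_label, annotator_trust=0.6))

In the paper, the confidence measure is built from all of the machine's available knowledge and both sources share a uniform scale; here a constant trust score stands in for that estimate to keep the sketch self-contained.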
Keywords
Annotation Errors, Collaborative and Social Computing, Ubiquitous and Mobile Devices