Explained anomaly detection in text reviews: Can subjective scenarios be correctly evaluated?

Engineering Applications of Artificial Intelligence (2024)

Abstract
In the current landscape, user opinions exert unprecedented influence on the trajectory of companies. On online review platforms, these opinions, conveyed through text reviews and numerical ratings, significantly shape the perceived credibility of products and services. For this reason, detecting inappropriate reviews becomes crucial. This paper addresses the problem of automatic anomalous-review detection using a novel approach based on Anomaly Detection in the field of Natural Language Processing (NLP). Unlike other NLP tasks, anomaly detection in text is a relatively emerging area. We present a pipeline for opinion filtering that frames the problem as discerning between normal opinions, which contain relevant information about an item, and anomalous opinions with unrelated content. Its key functionalities are classifying the reviews, assigning normality scores, and generating an explanation for each classification, indispensable for the human moderators of these platforms. To evaluate the model, several Amazon datasets were used to demonstrate that its performance is robust, obtaining an average F1 score of 91.4 when detecting anomalies in the most complex scenario. In addition, a comparative study of three explainability techniques was conducted with 241 participants to measure their impact on understanding the model's classifications and to rank the explanations by perceived usefulness. The result is a system with great potential to automate tasks on online review platforms, offering insights into anomaly detection applications in textual data and showing the difficulties that arise when the task to be explained has a subjective component.
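The pipeline described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: it stands in a bag-of-words cosine similarity for the Transformer representations the paper uses, scores each review against a centroid of known-normal reviews, and flags low-scoring reviews as anomalous. All function names and the threshold below are assumptions for illustration only.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words embedding; the paper relies on Transformer representations.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def normality_scores(reviews, reference):
    # Score each review by its similarity to the centroid of normal reviews:
    # on-topic reviews score high, unrelated content scores low.
    centroid = Counter()
    for r in reference:
        centroid.update(embed(r))
    return [cosine(embed(r), centroid) for r in reviews]

reference = [
    "great battery life and a sharp screen",
    "battery lasts all day screen is bright",
]
reviews = [
    "the battery and screen are excellent",    # relevant to the item
    "visit my website for cheap loans today",  # unrelated content
]
scores = normality_scores(reviews, reference)
anomalous = [s < 0.2 for s in scores]  # threshold is an arbitrary choice here
```

A real system would additionally attach an explanation to each decision (e.g. the tokens that drove the score), which is the part of the pipeline the paper's user study evaluates.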
Keywords
Anomaly detection, Text reviews, Transformers, Explainability