MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models

2020 IEEE International Conference on Data Mining (ICDM)(2020)

Citations 64 | Views 285
Abstract
In recent years, the proliferation of so-called “fake news” has caused much disruption in society and weakened the news ecosystem. To mitigate such problems, researchers have developed state-of-the-art (SOTA) models that automatically detect fake news on social media using sophisticated data science and machine learning techniques. In this work, we ask “what if adversaries attempt to attack such detection models?” and investigate related issues by (i) proposing a novel attack scenario against fake news detectors, in which adversaries can post malicious comments on news articles to mislead SOTA fake news detectors, and (ii) developing Malcom, an end-to-end adversarial comment generation framework to achieve such an attack. Through a comprehensive evaluation, we demonstrate that Malcom can successfully mislead five of the latest neural detection models into always outputting the targeted real and fake news labels about 94% and 93.5% of the time on average, respectively. Furthermore, Malcom can also fool black-box fake news detectors into always outputting real news labels 90% of the time on average. We also compare our attack model with four baselines across two real-world datasets, not only on attack performance but also on generation quality, coherency, transferability, and robustness. We release the source code of Malcom at https://github.com/lethaiq/MALCOM.
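To make the attack scenario in the abstract concrete, the sketch below illustrates the threat model: a detector that classifies an article jointly from its text and its comment thread, and an attacker who appends generated malicious comments hoping to flip the prediction toward a target label. This is a minimal illustration only, not the authors' implementation; the interfaces (`FakeNewsDetector`, `CommentGenerator`, the `attack` helper) and parameter names are hypothetical.

```python
# Illustrative sketch of the comment-based attack scenario (hypothetical API,
# not the Malcom codebase): the detector scores an article together with its
# comments, and the attacker appends generated comments to steer the label.
from typing import List, Protocol


class FakeNewsDetector(Protocol):
    def predict(self, article: str, comments: List[str]) -> str:
        """Return 'real' or 'fake' for an article given its comment thread."""
        ...


class CommentGenerator(Protocol):
    def generate(self, article: str, target_label: str, n: int) -> List[str]:
        """Produce n comments intended to push the detector toward target_label."""
        ...


def attack(detector: FakeNewsDetector,
           generator: CommentGenerator,
           article: str,
           existing_comments: List[str],
           target_label: str = "real",
           n_malicious: int = 3) -> bool:
    """Append generated malicious comments to the thread and report whether
    the detector's output now matches the attacker's target label."""
    malicious = generator.generate(article, target_label, n_malicious)
    prediction = detector.predict(article, existing_comments + malicious)
    return prediction == target_label
```

In this framing, the attack success rates reported in the abstract correspond to the fraction of articles for which such an `attack` call returns `True` for the targeted label.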
Keywords
Fake News, Adversarial Attack, Malicious Comments, MALCOM, Misinformation, Social Media