Automatic Fact Checking Using an Interpretable Bert-Based Architecture on COVID-19 Claims

Applied Sciences (2022)

Abstract
We present a neural network architecture for verifying facts against evidence found in a knowledge base. The architecture performs relevance evaluation and claim verification, two stages of the well-known three-stage fact-checking pipeline. We fine-tuned BERT to encode claims and pieces of evidence separately. An attention layer between the claim and evidence representations computes alignment scores that identify relevant terms shared by the two. Finally, a classification layer receives the vector representations of the claim and evidence and performs the relevance and verification classification. Our model allows a more straightforward interpretation of its predictions than other state-of-the-art models: the scores computed within the attention layer show which evidence spans are most relevant to classifying a claim as supported or refuted. Our classification models achieve accuracy comparable to state-of-the-art models on both relevance evaluation and claim verification on the FEVER dataset.
Keywords
fact checking, deep learning, attention, BERT, interpretable model, COVID-19
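
The abstract describes encoding claims and evidence separately with a fine-tuned BERT, computing claim-to-evidence alignment scores with an attention layer, and classifying from the resulting representations. The following is a minimal sketch of that idea, assuming PyTorch and the Hugging Face transformers library; the class name, pooling strategy, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Sketch of a claim/evidence verifier with an interpretable attention layer
# (assumption-based; not the authors' code).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class ClaimEvidenceVerifier(nn.Module):
    def __init__(self, num_labels: int = 2, model_name: str = "bert-base-uncased"):
        super().__init__()
        # A single BERT encoder, applied separately to claims and evidence.
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Classification head over the concatenated claim/evidence vectors.
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, claim_inputs, evidence_inputs):
        # Encode claim and evidence independently.
        claim_out = self.bert(**claim_inputs).last_hidden_state    # (B, Lc, H)
        evid_out = self.bert(**evidence_inputs).last_hidden_state  # (B, Le, H)

        # Attention layer: alignment scores between claim and evidence tokens.
        scores = torch.matmul(claim_out, evid_out.transpose(1, 2))  # (B, Lc, Le)
        attn = torch.softmax(scores, dim=-1)

        # Evidence summary aligned to the claim, pooled to fixed-size vectors.
        aligned_evidence = torch.matmul(attn, evid_out)              # (B, Lc, H)
        claim_vec = claim_out.mean(dim=1)
        evidence_vec = aligned_evidence.mean(dim=1)

        # Relevance / verification classification from both representations.
        logits = self.classifier(torch.cat([claim_vec, evidence_vec], dim=-1))
        # Returning `attn` lets one inspect which evidence spans drove the decision.
        return logits, attn


if __name__ == "__main__":
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    model = ClaimEvidenceVerifier(num_labels=2)
    claim = tok("Vaccination reduces severe COVID-19 illness.", return_tensors="pt")
    evidence = tok("Clinical trials reported lower hospitalization rates.", return_tensors="pt")
    logits, attn = model(claim, evidence)
    print(logits.shape, attn.shape)

Inspecting the returned attention matrix row by row gives, for each claim token, a distribution over evidence tokens, which is one way to realize the span-level interpretability the abstract refers to.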