Neural Network Based Prediction of Terrorist Attacks Using Explainable Artificial Intelligence

2023 IEEE Conference on Artificial Intelligence (CAI), 2023

Abstract
AI has transformed the field of terrorism prediction, allowing law enforcement agencies to identify potential threats far more quickly and accurately. This paper proposes a first-time application of a neural network to predict the "success" of a terrorist attack. The neural network attains an accuracy of 91.66% and an F1 score of 0.954, exceeding the results of alternative benchmark models. However, using AI for predictions in high-stakes decisions also has limitations, including possible biases and ethical concerns. Therefore, the explainable AI (XAI) tool LIME is used to provide more insight into the algorithm's inner workings.
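As a hedged illustration of the evaluation metrics reported above (not the paper's actual code or data), the sketch below computes accuracy and binary F1 from hypothetical success/failure labels, the same two metrics used to benchmark the neural network:

```python
def accuracy_and_f1(y_true, y_pred):
    """Compute accuracy and binary F1 for label lists (1 = 'successful' attack)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, f1

# Toy labels, invented for illustration only (not drawn from the GTD):
y_true = [1, 1, 1, 0, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 1, 0]
acc, f1 = accuracy_and_f1(y_true, y_pred)  # acc = 0.75, f1 = 0.8
```

F1 is the harmonic mean of precision and recall, which is why the paper can report an F1 (0.954) higher than its accuracy (91.66%) on a dataset where "successful" attacks dominate.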
Keywords
Explainable AI, terrorism prediction, Global Terrorism Database (GTD), LIME, neural networks