The Effects of Autoencoders on the Robustness of Deep Learning Models

2022 30th Signal Processing and Communications Applications Conference (SIU), 2022

Abstract
Adversarial attacks aim to deceive the target system. Recently, deep learning methods have become a target of adversarial attacks. Even small perturbations can lead to classification errors in deep learning models. In an intrusion detection system based on deep learning, an adversarial attack can cause classification errors, allowing malicious traffic to be classified as benign. In this study, the effects of adversarial attacks on the accuracy of deep learning-based intrusion detection systems were examined. The CICIDS2017 dataset was used to test the detection systems. First, DDoS attacks were detected using the Autoencoder, MLP, AEMLP, DNN, AEDNN, CNN, and AECNN methods. Then, the Fast Gradient Sign Method (FGSM) was used to perform adversarial attacks. Finally, the sensitivity of the methods to the adversarial attacks was examined. Our results show that the classification performance of the deep learning-based detection methods decreased by up to 17% after the adversarial attacks. The results obtained in this study form a basis for verification and validation studies of learning-based intrusion detection systems.
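For context, FGSM crafts an adversarial example by shifting each input feature in the direction of the sign of the loss gradient, x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The sketch below is an illustrative PyTorch implementation under assumed placeholders (a generic classifier model, inputs x, labels y, and a perturbation budget epsilon); it is not the code used in the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    # Compute the classification loss and its gradient with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each feature by epsilon in the direction of the gradient's sign.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()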
Keywords
adversarial attack,intrusion detection systems,deep learning