Evading Machine Learning Botnet Detection Models via Deep Reinforcement Learning

IEEE International Conference on Communications (2019)

Citations 45 | Views 79
Abstract
Botnets are one of the predominant threats to Internet security. To date, machine learning has been widely applied to botnet detection because it can summarize the features of existing attacks and generalize to never-before-seen botnet families. However, recent work in adversarial machine learning has shown that attackers can bypass detection models by constructing specific samples, since many algorithms are vulnerable to almost imperceptible perturbations of their inputs. According to the degree of the adversary's knowledge about the model, adversarial attacks can be classified into several groups, such as gradient-based and score-based attacks. In this paper, we propose a more general framework based on deep reinforcement learning (DRL), which effectively generates adversarial traffic flows to deceive the detection model by automatically adding perturbations to samples. Throughout the process, the target detector is treated as a black box, which is closer to a realistic attack scenario. A reinforcement learning agent updates the adversarial samples by combining the feedback from the target model (i.e., benign or malicious) with the sequence of actions taken; these actions change the temporal and spatial features of the traffic flows while maintaining their original functionality and executability. Experimental results show that the evasion rates of adversarial botnet flows are significantly improved. Furthermore, from a defensive perspective, this research can help the detection model expose its weaknesses and thus enhance its robustness.
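The loop the abstract describes (a black-box detector returning only a benign/malicious label, and an agent that learns which functionality-preserving perturbations to apply) can be illustrated with a minimal sketch. Everything below is hypothetical: the threshold detector, the flow-feature dictionary, and the three perturbation actions are stand-ins invented for illustration, and a stateless tabular Q-learner replaces the paper's deep RL agent.

```python
import random

# Hypothetical black-box detector: only its binary verdict is visible to
# the agent (1 = malicious, 0 = benign), mimicking the paper's threat model.
def black_box_detector(flow):
    return 1 if flow["pkts_per_sec"] > 50 else 0

# Illustrative action set: perturbations of temporal/spatial flow features
# that leave the payload (and thus functionality) untouched.
ACTIONS = [
    lambda f: {**f, "pkts_per_sec": f["pkts_per_sec"] * 0.8},  # slow the send rate
    lambda f: {**f, "pkt_size": f["pkt_size"] + 64},           # pad packets
    lambda f: {**f, "duration": f["duration"] * 1.2},          # stretch the flow
]

def evade(flow, episodes=30, max_steps=10, eps=0.2, alpha=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the action indices.

    Reward is 1 when the detector labels the perturbed flow benign,
    0 otherwise; returns the first evading flow found, or None.
    """
    rng = random.Random(seed)
    q = [0.0] * len(ACTIONS)  # one value per action (stateless simplification)
    for _ in range(episodes):
        f = dict(flow)
        for _ in range(max_steps):
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))          # explore
            else:
                a = max(range(len(ACTIONS)), key=q.__getitem__)  # exploit
            f = ACTIONS[a](f)
            reward = 1.0 - black_box_detector(f)
            q[a] += alpha * (reward - q[a])
            if reward == 1.0:
                return f
    return None

sample = {"pkts_per_sec": 120.0, "pkt_size": 200, "duration": 5.0}
adversarial = evade(sample)
```

In this toy setup the agent quickly learns that the rate-reduction action earns reward, converging on a flow the detector labels benign; the paper's actual agent works the same way but with a learned policy network and real traffic-flow features.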
Keywords
botnet,adversarial,reinforcement learning