Black-box attacks against log anomaly detection with adversarial examples

Information Sciences (2023)

Deep neural networks (DNNs) have been widely employed for log anomaly detection and outperform a range of conventional methods. They achieve such striking success because they can explore and extract semantic information from large volumes of log data, which helps infer complex log anomaly patterns more accurately. Despite its strong generalization accuracy, this data-driven approach remains highly vulnerable to adversarial attacks, which severely limits its practical use. To address this issue, several studies have proposed anomaly detectors that harden neural networks to improve their robustness. These detectors are built on effective adversarial attack methods; effective attack approaches are therefore important for developing more efficient anomaly detectors and, in turn, for improving neural network robustness. In this study, we propose two strong and effective black-box attackers, an attention-based attacker and a gradient-based attacker, to defeat three target systems: MLP, AutoEncoder, and DeepLog. Our approach generates more effective adversarial examples by analyzing vulnerable logkeys. The attention-based attacker leverages attention weights, obtained from our previously developed attention-based convolutional neural network model, to identify vulnerable logkeys and derive adversarial examples. The gradient-based attacker calculates gradients with respect to potentially vulnerable logkeys to seek an optimal adversarial example. Experimental results show that both approaches significantly outperform the state-of-the-art attacker, log anomaly mask (LAM). In particular, owing to its optimization, the gradient-based attacker substantially increases the misclassification rate on all three target models, achieving a 70% attack success rate on DeepLog and exceeding the baseline by 52%.

(c) 2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.
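The abstract's gradient-based idea, substituting the logkey whose change most strongly pushes the detector's score in the attacker's favor, can be illustrated with a minimal sketch. This is a hypothetical HotFlip-style substitution on a toy linear detector, not the paper's actual model or method; the vocabulary size, window length, and all function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 8    # number of distinct logkeys (hypothetical)
SEQ_LEN = 5  # detection window size (hypothetical)

# Toy linear "detector": anomaly score = sigmoid(sum_t W[key_t]).
W = rng.normal(size=(VOCAB,))

def score(seq):
    """Anomaly score of a logkey sequence under the toy linear model."""
    return 1.0 / (1.0 + np.exp(-sum(W[k] for k in seq)))

def gradient_attack(seq):
    """One-flip attack: for a linear model over one-hot logkeys, the
    gradient of the logit w.r.t. the one-hot entry for key v is W[v],
    so the best substitution is the logkey with the smallest weight,
    placed at the position contributing most to the current score."""
    adv = list(seq)
    target = int(np.argmin(W))                    # key that lowers the logit most
    pos = int(np.argmax([W[k] for k in seq]))     # most "vulnerable" position
    adv[pos] = target
    return adv

anomalous = [int(np.argmax(W))] * SEQ_LEN  # strongly anomalous input
adv = gradient_attack(anomalous)
assert score(adv) < score(anomalous)       # the flip lowers the anomaly score
```

A real attack on DeepLog-style sequence models would backpropagate through the network to rank candidate logkey substitutions, but the selection principle (perturb the position and key indicated by the gradient) is the same.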
Log analysis, Big data, Anomaly detection, Deep learning