An Adversarial Approach to Evaluating the Robustness of Event Identification Models
CoRR (2024)
Abstract
Machine learning approaches are in active use for event
detection and identification, enabling real-time situational awareness. Yet,
such machine learning algorithms have been shown to be susceptible to
adversarial attacks on the incoming telemetry data. This paper considers a
physics-based modal decomposition method to extract features for event
classification and focuses on interpretable classifiers including logistic
regression and gradient boosting to distinguish two types of events: load loss
and generation loss. The resulting classifiers are then tested against an
adversarial algorithm to evaluate their robustness. The adversarial attack is
tested in two settings: the white box setting, wherein the attacker knows
exactly the classification model; and the gray box setting, wherein the
attacker has access to historical data from the same network as was used to
train the classifier, but does not know the classification model. Thorough
experiments on the synthetic South Carolina 500-bus system highlight that a
relatively simple model such as logistic regression is more susceptible to
adversarial attacks than gradient boosting.
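As a rough illustration of the comparison the abstract describes, the sketch below trains a logistic regression and a gradient boosting classifier on synthetic stand-in features, then applies an FGSM-style perturbation derived from the logistic model's closed-form input gradient (a white-box attack on that model) and transfers the same perturbation to the gradient boosting model (a gray-box surrogate attack). The dataset, feature count, and attack budget `eps` are illustrative assumptions, not the paper's actual modal-decomposition features or attack algorithm.

```python
# Illustrative sketch only: synthetic features stand in for the paper's
# modal-decomposition features of load-loss vs. generation-loss events.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

eps = 0.3  # L-infinity attack budget; an assumed value

# White-box FGSM step on logistic regression: the gradient of the logistic
# loss w.r.t. the input is (sigma(w.x) - y) * w, so its sign is -sign(w)
# for label 1 and +sign(w) for label 0.
signs = np.where(y_te == 1, -1.0, 1.0)[:, None]
X_adv = X_te + eps * signs * np.sign(lr.coef_)

# Gray-box transfer: gradient boosting has no input gradients, so reuse the
# perturbation crafted against the logistic surrogate.
print("LR clean/adv accuracy:", lr.score(X_te, y_te), lr.score(X_adv, y_te))
print("GB clean/adv accuracy:", gb.score(X_te, y_te), gb.score(X_adv, y_te))
```

In a typical run the attack degrades the logistic model sharply while the transferred perturbation hurts gradient boosting less, which is consistent with the abstract's finding that the simpler model is more susceptible.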