Breaking Boundaries: Balancing Performance and Robustness in Deep Wireless Traffic Forecasting

Proceedings of the 2023 Workshop on Recent Advances in Resilient and Trustworthy ML Systems in Autonomous Networks (ARTMAN 2023)

Abstract
Balancing the trade-off between accuracy and robustness is a longstanding challenge in time series forecasting. While most existing robust algorithms achieve acceptable performance on clean data, sustaining that performance level in the presence of data perturbations remains extremely hard. In this paper, we study a wide array of perturbation scenarios and propose novel defense mechanisms against adversarial attacks using real-world telecom data. We compare our strategy against two existing adversarial training algorithms under a range of maximal allowed perturbations, defined by the ℓ∞-norm with ε ∈ [0.1, 0.4]. Our findings reveal that our hybrid strategy, composed of a classifier to detect adversarial examples, a denoiser to remove noise from perturbed data samples, and a standard forecaster, achieves the best performance on both clean and perturbed data. Our optimal model retains up to 92.02% of the original forecasting model's performance in terms of Mean Squared Error (MSE) on clean data, while being more robust than standard adversarially trained models on perturbed data. Its MSE is 2.71x and 2.51x lower than those of competing methods on clean and perturbed data, respectively. In addition, the components of our model can be trained in parallel, yielding better computational efficiency. Our results indicate that the trade-off between the performance and robustness of forecasting models can be optimally balanced by improving the classifier and denoiser, even in the presence of sophisticated and destructive poisoning attacks.
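The hybrid strategy described above can be sketched as a simple routing pipeline: a detector flags perturbed input windows, a denoiser cleans only the flagged ones, and the unmodified forecaster runs on the result, so clean inputs keep the original model's accuracy. The sketch below is a minimal illustration under assumed stand-in components (threshold detector, ℓ∞-ball projection, naive last-value forecaster); the paper's actual classifier, denoiser, and forecaster are learned models, and the function names here are hypothetical.

```python
import numpy as np

def classify_adversarial(window, clean_ref, eps=0.1):
    # Stand-in detector: flag the window if its largest deviation from a
    # clean reference exceeds the l-infinity budget eps.
    return np.max(np.abs(window - clean_ref)) > eps

def denoise(window, clean_ref, eps=0.1):
    # Stand-in denoiser: project the window back into the eps-ball
    # around the clean reference (element-wise l-infinity clipping).
    return np.clip(window, clean_ref - eps, clean_ref + eps)

def forecast(window):
    # Stand-in forecaster: naive one-step-ahead prediction (last value).
    return window[-1]

def hybrid_forecast(window, clean_ref, eps=0.1):
    # Route through the denoiser only when the detector fires, so clean
    # inputs pass straight to the forecaster untouched.
    if classify_adversarial(window, clean_ref, eps):
        window = denoise(window, clean_ref, eps)
    return forecast(window)

# Example: a clean window and a perturbed copy at the largest budget
# studied in the paper (l-infinity perturbation of 0.4).
clean = np.array([1.0, 1.2, 1.1, 1.3])
perturbed = clean + np.array([0.0, 0.4, -0.4, 0.4])
print(hybrid_forecast(clean, clean))      # clean path, no denoising
print(hybrid_forecast(perturbed, clean))  # detected, denoised first
```

Because the three components are independent modules, they can be trained separately (and in parallel), which is the source of the computational-efficiency claim in the abstract.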
Keywords
Forecasting, Poisoning, Classification, Denoising, Components, Robustness, Performance