Bagging Classifiers For Fighting Poisoning Attacks In Adversarial Classification Tasks

MCS'11: Proceedings of the 10th International Conference on Multiple Classifier Systems (2011)

Abstract
Pattern recognition systems have been widely used in adversarial classification tasks like spam filtering and intrusion detection in computer networks. In these applications a malicious adversary may successfully mislead a classifier by "poisoning" its training data with carefully designed attacks. Bagging is a well-known ensemble construction method, where each classifier in the ensemble is trained on a different bootstrap replicate of the training set. Recent work has shown that bagging can reduce the influence of outliers in training data, especially if the most outlying observations are resampled with a lower probability. In this work we argue that poisoning attacks can be viewed as a particular category of outliers, and, thus, bagging ensembles may be effectively exploited against them. We experimentally assess the effectiveness of bagging on a real, widely used spam filter, and on a web-based intrusion detection system. Our preliminary results suggest that bagging ensembles can be a very promising defence strategy against poisoning attacks, and give us valuable insights for future research work.
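To make the weighted-resampling idea concrete, the sketch below is our own illustration (not the authors' code) of a bagging ensemble in which training points with low kernel-density estimates, which are more likely to be outliers or poisoned samples, are drawn into bootstrap replicates with lower probability. The choice of a decision-tree base learner, the bandwidth value, and the function names are assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's implementation): bagging with a
# kernel-density-weighted bootstrap, so that low-density (outlying / possibly
# poisoned) training points are resampled with lower probability.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.tree import DecisionTreeClassifier


def weighted_bagging(X, y, n_estimators=25, bandwidth=1.0, random_state=0):
    """Train a bagging ensemble whose bootstrap sampling down-weights outliers."""
    rng = np.random.default_rng(random_state)
    n = len(X)

    # Kernel density estimate of each training point:
    # low density -> likely outlier -> lower resampling weight.
    log_density = KernelDensity(bandwidth=bandwidth).fit(X).score_samples(X)
    weights = np.exp(log_density)
    weights /= weights.sum()

    ensemble = []
    for _ in range(n_estimators):
        # Weighted bootstrap replicate of the training set.
        idx = rng.choice(n, size=n, replace=True, p=weights)
        clf = DecisionTreeClassifier(random_state=random_state).fit(X[idx], y[idx])
        ensemble.append(clf)
    return ensemble


def predict(ensemble, X):
    """Classify by majority vote over the ensemble members."""
    votes = np.stack([clf.predict(X) for clf in ensemble]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

With uniform weights this reduces to standard bagging; the density-based weighting is one plausible way to realize the "resample the most outlying observations with lower probability" strategy the abstract refers to.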
Keywords
Training Data, Intrusion Detection, Intrusion Detection System, Ensemble Size, Kernel Density Estimator