
Systematic Literature Review: Evaluating Effects of Adversarial Attacks and Attack Generation Methods

2023 International Conference on Energy, Power, Environment, Control, and Computing (ICEPECC)(2023)

Abstract

Advances in Artificial Intelligence (AI) aim to train Machine Learning (ML) models to make decisions autonomously. Attackers, however, attempt to manipulate the outputs of these models, which complicates their deployment in security-critical applications such as medical image classification, autonomous systems in vehicles, street-light control, malware detection, and classifying a person as criminal or innocent. Research has shown that these classifiers are vulnerable to adversarial attacks, which can alter a model's results during both its training and its testing phase. Causative attacks target the training phase, whereas exploratory attacks are carried out at test time. This Systematic Literature Review (SLR) is conducted to gain in-depth knowledge of adversarial attacks, the most effective type of exploratory attack; the parameters on which these attacks are based; and the most common methods of generating adversarial examples.
Keywords

Adversarial attacks, Attack Generation Methods, Fast Gradient Sign Method, Local Search, Transfer Based, Decision-Based Attacks
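The Fast Gradient Sign Method (FGSM) listed among the keywords is one of the attack generation methods the review surveys. As an illustration only (this sketch is not taken from the paper), a one-step FGSM perturbation x_adv = x + eps * sign(∇x L) can be computed in closed form for a tiny logistic-regression classifier, where the input gradient of the binary cross-entropy loss is (p - y) * w:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM against logistic regression (illustrative sketch).

    Loss: binary cross-entropy with p = sigmoid(w.x + b);
    its gradient w.r.t. the input x is (p - y) * w.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Toy example: a fixed linear classifier and a correctly classified point.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0            # clean input, true class 1 (w.x + b = 1.5)

x_adv = fgsm(x, y, w, b, eps=1.0)
clean_score = sum(wi * xi for wi, xi in zip(w, x)) + b       # 1.5 -> class 1
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b     # -1.5 -> class 0
```

With a sufficiently large eps, the single gradient-sign step flips the classifier's decision on an otherwise correctly classified input, which is the core idea exploited by exploratory (test-time) attacks discussed in the abstract.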