Analysis Of Causative Attack Over MNIST Dataset Using Convolutional Neural Network

2023 IEEE World AI IoT Congress (AIIoT)(2023)

Abstract
Machine learning security is concerned with ensuring the robustness of the algorithms used in different classifier methods. If a classifier is trained improperly, it will produce unexpected classifications and predict incorrect labels. By poisoning the input to the classifier, an attacker can achieve the goal of exploiting its predictions; altering the training data in this way constitutes a causative attack on machine learning. In this work, we focus on the MNIST dataset, a handwritten digit dataset available online, to analyze the impact of causative attacks. We implement a Convolutional Neural Network (CNN) classifier to evaluate adversarial samples created by a Generative Adversarial Network (GAN) model, where the GAN generates samples from a d-dimensional noise vector through multiple hidden layers. Our implemented GAN fools the original CNN classifier, causing it to misclassify digit labels in the MNIST dataset. The implementation uses TensorFlow in Python, and we present the results obtained, evaluating the accuracy of the sample images with visual plots to assess the success of our attack model.
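The sketch below illustrates the kind of pipeline the abstract describes, assuming typical choices since the paper's exact architectures and layer sizes are not given here: a small CNN classifier for MNIST, a GAN-style generator that maps a d-dimensional noise vector through several dense hidden layers to a 28x28 image, and a poisoning step that mixes mislabeled generated samples into the training set. The noise dimension, layer widths, and poison-set size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a causative (poisoning) attack setup on MNIST with TensorFlow.
# Architectures and hyperparameters below are assumptions for illustration only.
import tensorflow as tf

NOISE_DIM = 100  # assumed size of the d-dimensional noise vector


def build_cnn_classifier():
    # CNN that classifies 28x28 grayscale digits into 10 classes.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])


def build_generator(noise_dim=NOISE_DIM):
    # Generator with multiple hidden layers mapping noise to a fake digit image.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(noise_dim,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(28 * 28, activation="tanh"),
        tf.keras.layers.Reshape((28, 28, 1)),
    ])


# Causative attack idea: inject GAN-generated images with deliberately wrong labels
# into the training data so the retrained classifier mispredicts on clean digits.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

generator = build_generator()
noise = tf.random.normal([1000, NOISE_DIM])
poison_images = (generator(noise) + 1.0) / 2.0              # rescale tanh output to [0, 1]
poison_labels = tf.random.uniform([1000], 0, 10, tf.int64)  # deliberately wrong labels

x_poisoned = tf.concat([x_train, poison_images], axis=0)
y_poisoned = tf.concat([tf.cast(y_train, tf.int64), poison_labels], axis=0)

classifier = build_cnn_classifier()
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(x_poisoned, y_poisoned, epochs=1, batch_size=128)
```

In practice the generator would first be trained adversarially against a discriminator on MNIST so that the poisoned samples look like plausible digits; the snippet only shows how the resulting samples would be merged into the classifier's training data.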
Keywords
machine learning, convolutional neural network, generative adversarial network (GAN), MNIST, causative attack