Uncertainty estimation based adversarial attack in multi-class classification

Multimedia Tools and Applications (2022)

Abstract
Model uncertainty has gained popularity in machine learning because standard neural networks tend to produce overconfident predictions that are not trustworthy. Recently, the Monte-Carlo based adversarial attack (MC-AA) has been proposed as a simple uncertainty estimation method that is effective at capturing data points lying in the overlapping region of the decision boundary. MC-AA produces uncertainties by perturbing a given data point back and forth across the decision boundary using the idea of adversarial attacks. Despite its efficacy relative to other uncertainty estimation methods, MC-AA has so far been examined only on binary classification problems. We therefore present and examine MC-AA on multi-class classification tasks. We point out the limitation of this method with multiple classes, which we tackle by converting the multi-class problem into 'one-versus-all' classification. We compare MC-AA against other recent model uncertainty methods on Cora, a graph-structured dataset, and MNIST, an image dataset, with the experiments carried out using a variety of deep learning models to perform the classification. The best results are obtained with the LEConv model on Cora (AUC score of 0.889) and a CNN on MNIST (AUC score of 0.98), outperforming the other uncertainty estimation methods.
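
The abstract describes MC-AA only at a high level, so the following is a minimal sketch of the idea, assuming a PyTorch classifier that returns logits. The function name mc_aa_uncertainty, the parameters eps_max and n_steps, the use of the model's own predicted label as the attack target, and the choice of mutual information as the uncertainty score are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def mc_aa_uncertainty(model, x, y_pseudo, eps_max=0.1, n_steps=10):
    # Sketch of MC-AA-style uncertainty: perturb x back and forth
    # along the FGSM gradient direction for epsilons in [-eps_max, eps_max]
    # and measure the disagreement among the resulting predictions.
    # y_pseudo is the model's own predicted label, used as the attack
    # target in the absence of ground truth.
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_pseudo)
    loss.backward()
    direction = x.grad.sign()          # FGSM perturbation direction

    probs = []
    for eps in torch.linspace(-eps_max, eps_max, n_steps):
        x_adv = (x + eps * direction).detach()
        with torch.no_grad():
            probs.append(F.softmax(model(x_adv), dim=-1))
    p = torch.stack(probs)             # (n_steps, batch, classes)

    # Mutual information between predictions and perturbations:
    # entropy of the mean prediction minus mean of the entropies.
    p_mean = p.mean(dim=0)
    h_of_mean = -(p_mean * p_mean.clamp_min(1e-12).log()).sum(-1)
    mean_of_h = -(p * p.clamp_min(1e-12).log()).sum(-1).mean(0)
    return h_of_mean - mean_of_h       # higher = more uncertain

In use, y_pseudo can simply be model(x).argmax(dim=-1) computed beforehand. For the multi-class setting, the paper's one-versus-all conversion would amount to computing such a score for each class against the rest; treating the predicted class as the attack target, as above, is one common simplification.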
Keywords
Uncertainty estimation, Adversarial attack, Deep neural network