Lambertian-based Adversarial Attacks on Deep-Learning-based Underwater Side-Scan Sonar Image Classification

Pattern Recognition (2023)

Abstract
Deep convolutional neural networks (CNNs) are extensively applied to classification tasks for side-scan sonar (SSS) images. However, state-of-the-art neural networks are prone to be confused by adversarial attacks that generate a tiny modification of the images, threatening the security of SSS classification. The robustness of CNNs to adversarial attacks can be improved by introducing adversarial examples through adversarial training. Practical adversarial examples are often generated by elaborate adversarial attackers. For the underwater sonar scenario, a specially designed adversarial attack method that weakens SSS image classification can help the research community better understand the weaknesses of CNNs in this scenario and improve security measures in a well-directed way. Thus, exploring adversarial attack methods for SSS image classification is essential. Nevertheless, existing adversarial attack methods are designed for optical images and reflect no physical characteristics of sonar images. To fill this gap and investigate adversarial attacks under real-world conditions, in this paper we propose an adversarial attack method named Lambertian Adversarial Sonar Attack (LASA). It first leverages the Lambertian model to simulate the formation of the SSS image, factoring the image into three parameters; it then updates the parameters along the direction of the gradients via the chain rule. Finally, the parameters regenerate the adversarial example to fool the classifier. To validate the performance of LASA, we constructed a diversified SSS image dataset containing three categories. On our dataset, LASA reduces the Top-1 accuracy of a well-trained ResNet-101 to 7.31% ± 0.21 (one-shot version) and 0.00% (iterative version), and the success rate of the targeted attack reaches 97.03 ± 2.24, far beyond the performance of existing state-of-the-art adversarial attack methods. Meanwhile, we show that adversarial training using examples generated by LASA makes the classifier more robust. We expect that our method can serve as a benchmark for adversarial attacks on SSS images, motivating future research to design novel neural networks or defensive methods that resist real-world adversarial attacks on SSS images. © 2023 Elsevier Ltd. All rights reserved.
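To make the attack pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the one-shot idea: parameterize the image with Lambertian factors, back-propagate the classification loss to those factors (the chain-rule step), perturb them in the gradient's sign direction, and re-render the adversarial image. The specific factorization used here (source intensity phi, seabed reflectivity rho, incidence angle theta) and all function names are illustrative assumptions; the abstract only states that the image is factored into three parameters, and the paper's exact parameterization may differ.

import torch
import torch.nn.functional as F

def lambertian_render(phi, rho, theta):
    """Re-compose an SSS-like image from hypothetical Lambertian factors:
    I = phi * rho * cos(theta), clamped to the valid pixel range."""
    return (phi * rho * torch.cos(theta)).clamp(0.0, 1.0)

def lasa_one_shot(model, phi, rho, theta, label, eps=0.01):
    """FGSM-style step applied to the Lambertian parameters, not the pixels.

    Gradients of the loss flow back through the renderer to each factor,
    and the perturbed factors regenerate the adversarial example.
    """
    params = [p.clone().requires_grad_(True) for p in (phi, rho, theta)]
    image = lambertian_render(*params)           # shape (N, 1, H, W)
    loss = F.cross_entropy(model(image), label)  # untargeted: ascend the loss
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        perturbed = [p + eps * g.sign() for p, g in zip(params, grads)]
    return lambertian_render(*perturbed)         # adversarial image

An iterative variant would repeat the gradient step with a smaller eps per step, and a targeted variant would descend the loss toward a chosen target label instead of ascending it; both correspond to the one-shot and iterative/targeted results reported above.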
Keywords
Adversarial attack, Classification, Side-scan sonar, Lambertian model