Left-right Discrepancy for Adversarial Attack on Stereo Networks
CoRR (2024)
Abstract
Stereo matching neural networks often involve a Siamese structure to extract
intermediate features from left and right images. The similarity between these
intermediate left-right features significantly impacts the accuracy of
disparity estimation. In this paper, we introduce a novel adversarial attack
approach that generates perturbation noise specifically designed to maximize
the discrepancy between left and right image features. Extensive experiments
demonstrate the superior capability of our method to induce larger prediction
errors in stereo neural networks, e.g., outperforming existing state-of-the-art
attack methods by 219
dataset. Additionally, we extend our approach to include a proxy-network
black-box attack method, eliminating the need for access to the stereo
network. This method leverages an arbitrary network from a different vision
task as a proxy to generate adversarial noise, effectively causing the stereo
network to produce erroneous predictions. Our findings highlight a notable
sensitivity of stereo networks to discrepancies in shallow layer features,
offering valuable insights that could guide future research in enhancing the
robustness of stereo vision systems.
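The core idea described above — perturbing an input so that left and right intermediate features diverge — can be sketched with a signed gradient-ascent loop. This is a minimal illustration, not the paper's method: the feature extractor here is a stand-in linear map `W` (the paper attacks a Siamese stereo network), and the budget `eps`, step size `alpha`, and step count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the shared (Siamese) feature extractor: a random linear map.
W = rng.standard_normal((16, 64)) / 8.0
left = rng.random(64)   # flattened "left image" (toy data)
right = rng.random(64)  # flattened "right image" (toy data)

def features(x):
    # Toy "shallow layer" features; an assumption for this sketch.
    return W @ x

eps, alpha, steps = 0.03, 0.01, 10  # assumed L_inf budget and step schedule
delta = np.zeros(64)
for _ in range(steps):
    # Gradient of ||f(left+delta) - f(right)||^2 w.r.t. delta is
    # 2 W^T (f(left+delta) - f(right)) for a linear extractor.
    grad = 2.0 * W.T @ (features(left + delta) - features(right))
    # Signed ascent step, projected back into the L_inf ball of radius eps.
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

gap0 = np.sum((features(left) - features(right)) ** 2)           # before attack
gap1 = np.sum((features(left + delta) - features(right)) ** 2)   # after attack
```

After the loop, the left-right feature discrepancy `gap1` exceeds the unperturbed `gap0` while the perturbation stays within the `eps` budget; in the paper this enlarged discrepancy is what drives the stereo network's disparity errors.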