
Adversarial Computer Vision via Acoustic Manipulation of Camera Sensors.

IEEE Trans. Dependable Secur. Comput. (2024)

Abstract
Autonomous vehicles increasingly rely on camera-based computer vision systems to perceive environments and make critical driving decisions. To improve image quality, image stabilizers with inertial sensors are added to reduce image blurring caused by camera jitter. However, this trend creates a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of emerging image stabilizer hardware susceptible to acoustic manipulation and computer vision algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even when the camera is stable. These blurred images can induce object misclassification, affecting safety-critical decision-making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks: hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against five object detectors (YOLO V3/V4/V5, Faster R-CNN, and Apollo) and two lane detectors (UFLD and LaneAF). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.
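The core effect the abstract describes — blur produced when spoofed inertial readings trigger spurious motion compensation — can be approximated with a toy simulation. The sketch below (an illustration only, not the paper's attack implementation) models the stabilizer's smear as convolution with a linear motion-blur kernel whose length and angle stand in for the acoustically injected "motion"; all function names are hypothetical.

```python
import numpy as np

def motion_blur_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Build a normalized linear motion-blur kernel.

    Approximates the smear left when a stabilizer compensates for
    (spoofed) camera motion of the given extent and direction.
    """
    k = np.zeros((length, length))
    c = length // 2
    theta = np.deg2rad(angle_deg)
    # Rasterize a line segment through the kernel center.
    for t in np.linspace(-c, c, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < length and 0 <= y < length:
            k[y, x] = 1.0
    return k / k.sum()

def apply_blur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve a grayscale image with the kernel (edge padding)."""
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    padded = np.pad(image, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

Feeding such blurred frames to a detector is how one would probe whether the induced blur hides, creates, or alters detected objects, mirroring the three attack types evaluated in the paper.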
Keywords
Intelligent vehicle security, computer vision, adversarial machine learning