Physical Adversarial Attack on a Robotic Arm

IEEE Robotics and Automation Letters (2022)

Abstract
Collaborative Robots (cobots) are regarded as highly safety-critical cyber-physical systems (CPSs) owing to their close physical interactions with humans. In settings such as smart factories, they are frequently augmented with AI. For example, in order to move materials, cobots utilize object detectors based on deep learning models. Deep learning, however, has been demonstrated to be vulnerable to adversarial attacks: a minor change (noise) to a benign input can fool the underlying neural networks and lead to a different result. While existing works have explored such attacks in the context of image/object classification, less attention has been given to attacking the neural networks used for identifying object locations, and to demonstrating that this can actually lead to a physical attack on a real CPS. In this paper, we propose a method to generate adversarial patches for the object detectors of CPSs, in order to miscalibrate them and cause potentially dangerous physical effects. In particular, we evaluate our method on an industrial robotic arm for card gripping, demonstrating that it can be misled into clipping the operator's hand instead of the card. To our knowledge, this is the first work in which an attack on object locations leads to an incident involving human users of an actual system.
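
The abstract describes an adversarial patch attack on the location predictions of an object detector. As a rough illustrative sketch only (not the paper's method, which targets a YOLO-based card-gripping pipeline), the snippet below shows the generic idea: freeze a pretrained detector, paste a trainable patch into the input frame, and run gradient ascent on the detector's box-regression loss so that predicted locations drift away from the true object. The torchvision detector, the stand-in camera frame, the target box, the patch placement, and all hyperparameters are placeholder assumptions; a physical attack would additionally optimize over many real frames and transformations before printing the patch.

# Illustrative sketch (assumed setup, not the paper's YOLO pipeline):
# optimize a patch that inflates a detector's box-regression loss so that
# predicted object locations drift. Requires torch and torchvision >= 0.13.
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.train().to(device)           # training mode exposes the individual loss terms
for p in detector.parameters():       # freeze the network; only the patch is optimized
    p.requires_grad_(False)

image = torch.rand(3, 480, 640, device=device)             # placeholder camera frame
target = [{"boxes": torch.tensor([[200., 150., 330., 260.]], device=device),
           "labels": torch.tensor([1], device=device)}]    # placeholder "card" location

patch = torch.rand(3, 80, 80, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

for step in range(200):
    adv = image.clone()
    adv[:, 50:130, 50:130] = patch.clamp(0, 1)   # paste the patch at a fixed spot
    losses = detector([adv], target)
    loss = -losses["loss_box_reg"]               # ascend the localization loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the physical setting described in the abstract, the optimized patch would then be printed and placed in the robot's workspace so that the camera observes it at inference time.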
Keywords
Adversarial patch attack, cyber-physical systems, object detection, physical attacks, YOLO