Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

ICCV Workshops, pp. 751-759, 2017.


Abstract:

Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to ev…
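The abstract's notion of an adversarial example — an input shifted by a small, bounded perturbation that flips the classifier's decision — can be sketched with the fast gradient sign method (FGSM). This is a generic illustration, not the attack evaluated in the paper; the logistic-regression model and the names `w`, `b`, `eps` are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM against a logistic-regression classifier (illustrative).

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves x by eps in the direction
    of the sign of that gradient, so the perturbation is bounded by
    eps in the L-infinity norm."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy point correctly classified as class 1 (w.x + b = 1.1 > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.1])
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.6)

print(np.max(np.abs(x_adv - x)))            # perturbation size, capped at eps
print(int(w @ x + b > 0), int(w @ x_adv + b > 0))  # prediction flips: 1 -> 0
```

The same sign-of-gradient step applies to deep networks, where the input gradient is obtained by backpropagation; only the bounded perturbation budget `eps` keeps the noise "barely perceivable."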
