Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
ICCV Workshops, pp. 751-759, 2017.
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate …
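The abstract's core idea, that a small, carefully crafted perturbation can flip a classifier's decision, can be illustrated with a minimal sketch of the fast gradient sign method (FGSM). This is only an assumed stand-in for exposition: the toy linear classifier below is hypothetical and is not the deep network or the attack studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)           # toy linear classifier weights (hypothetical)
x = 0.05 * w / np.linalg.norm(w)   # clean input, weakly aligned with w -> class +1

def predict(v):
    """Linear classifier: sign of the score w . v."""
    return 1 if w @ v > 0 else -1

# For the true label +1, the gradient of a margin loss w.r.t. the input
# is -w, so an FGSM step moves each component by eps in the direction
# that decreases the score: x_adv = x - eps * sign(w).
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints: 1 -1  (the decision flips)
print(np.max(np.abs(x_adv - x)))  # per-component perturbation is bounded by eps
```

Because the L1 norm of `w` always dominates its L2 norm, the perturbed score `0.05*||w|| - eps*||w||_1` is negative whenever `eps` exceeds `0.05`, so the flip here is guaranteed even though each pixel moves by at most `eps`.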