Deep Neural Rejection against Adversarial Examples

Angelo Sotgiu, Ambra Demontis, Xiaoyi Feng

EURASIP Journal on Information Security, pp. 1-10, 2019.


Abstract:

Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection mechanism to detect adversarial examples, …
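The abstract only hints at how rejection works; the basic idea behind rejection-based defenses is to abstain from classifying inputs whose scores look anomalous or insufficiently confident. The snippet below is a minimal, generic sketch of score-thresholded rejection, not the paper's full Deep Neural Rejection detector; the function name `reject_or_classify` and the threshold value are illustrative assumptions.

```python
import numpy as np

def reject_or_classify(scores, threshold):
    """Return the predicted class index, or -1 (reject) if the top
    score falls below the rejection threshold."""
    scores = np.asarray(scores, dtype=float)
    return int(scores.argmax()) if scores.max() >= threshold else -1

# Toy usage: per-class scores for a confidently classified input
# and for a suspicious (possibly adversarially perturbed) input.
clean_scores = [0.05, 0.90, 0.05]      # high top score -> class 1
perturbed_scores = [0.40, 0.35, 0.25]  # low top score -> rejected

threshold = 0.6  # hypothetical value; in practice tuned on validation data
print(reject_or_classify(clean_scores, threshold))      # 1
print(reject_or_classify(perturbed_scores, threshold))  # -1 (rejected)
```

In this sketch the threshold trades off rejecting adversarial inputs against wrongly rejecting clean ones; it would normally be chosen to keep the false-rejection rate on clean validation data acceptably low.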
