The (Un)reliability of saliency methods

Pieter-Jan Kindermans
Sara Hooker
Maximilian Alber

arXiv: Machine Learning, abs/1711.00867, 2018, pp. 267-280.

Cited by: 131 | DOI: https://doi.org/10.1007/978-3-030-28954-6_14

Abstract:

Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step, adding a mean shift to the input data, to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution.
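
The construction described in the abstract is easy to reproduce: a model that sees mean-shifted inputs x + m makes exactly the same predictions as the original if its first-layer bias absorbs the shift (b' = b - W m), yet attribution methods that multiply by the input, such as gradient x input, change their explanations. Below is a minimal sketch in PyTorch under these assumptions; the single linear layer and all variable names are illustrative, not taken from the paper's experiments.

```python
import torch

torch.manual_seed(0)

# Original model: y = W x + b (a single linear layer, purely for illustration)
W = torch.randn(3, 5)
b = torch.randn(3)

x = torch.randn(5, requires_grad=True)
m = 2.0 * torch.ones(5)  # constant (mean) shift added to every input

# Shifted model absorbs the shift in its bias:
#   W (x + m) + (b - W m) = W x + b
b_shifted = b - W @ m
x_shifted = (x + m).detach().requires_grad_(True)

y_orig = W @ x + b
y_shifted = W @ x_shifted + b_shifted
assert torch.allclose(y_orig, y_shifted, atol=1e-5)  # identical predictions

# Gradient and gradient*input attributions for output unit 0
y_orig[0].backward()
y_shifted[0].backward()

grad_orig, grad_shifted = x.grad, x_shifted.grad
gxi_orig = grad_orig * x.detach()
gxi_shifted = grad_shifted * x_shifted.detach()

print(torch.allclose(grad_orig, grad_shifted))  # True: plain gradients are input-invariant
print(torch.allclose(gxi_orig, gxi_shifted))    # False: gradient*input changes under the shift
```

The final two checks illustrate the paper's input-invariance criterion: the raw gradient is identical for both models and so passes, while gradient x input inherits the arbitrary shift and attributes differently even though the two models are functionally equivalent.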
