Do Human Rationales Improve Machine Explanations?

Julia Strout
Ye Zhang

BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP at ACL 2019, pp. 56-62, 2019.

Cited by: 2 | DOI: https://doi.org/10.18653/v1/w19-4807
Other links: dblp.uni-trier.de | academic.microsoft.com | arxiv.org

Abstract:

Work on "learning with rationales" shows that humans providing explanations to a machine learning system can improve the system's predictive accuracy. However, this work has not been connected to work in "explainable AI," which concerns machines explaining their reasoning to humans. In this work, we show that learning with rationales can also improve the quality of the machine's explanations as evaluated by human judges.
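For readers unfamiliar with the setup, the sketch below illustrates one common way "learning with rationales" is implemented for text classification: human-marked rationale tokens act as a soft supervision signal on the model's attention weights, alongside the usual classification loss. The architecture, loss weighting, and attention penalty here are illustrative assumptions, not necessarily the exact method used in the paper.

```python
# A minimal sketch (assumptions, not the authors' exact method) of training a
# CNN text classifier with rationale-supervised attention: human rationale
# masks serve as a soft target for the attention distribution, in addition to
# the standard classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNAttentionClassifier(nn.Module):
    """Word embeddings -> 1D convolution -> additive attention -> linear head."""

    def __init__(self, vocab_size: int, embed_dim: int = 50, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1)
        self.attn_score = nn.Linear(embed_dim, 1)   # one relevance score per token
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids: torch.Tensor):
        emb = self.embed(token_ids)                                     # (B, L, D)
        feats = F.relu(self.conv(emb.transpose(1, 2))).transpose(1, 2)  # (B, L, D)
        attn = F.softmax(self.attn_score(feats).squeeze(-1), dim=-1)    # (B, L)
        pooled = torch.bmm(attn.unsqueeze(1), feats).squeeze(1)         # (B, D)
        return self.classifier(pooled), attn


def loss_with_rationales(logits, attn, labels, rationale_mask, lam=1.0):
    """Cross-entropy plus a penalty pulling attention toward human rationales.

    rationale_mask: (B, L) binary mask, 1 where a human marked the token as a rationale.
    lam: illustrative trade-off weight (a hyperparameter, not a value from the paper).
    """
    ce = F.cross_entropy(logits, labels)
    # Turn the binary mask into a target distribution over tokens.
    target = rationale_mask / rationale_mask.sum(dim=-1, keepdim=True).clamp(min=1.0)
    return ce + lam * F.mse_loss(attn, target)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = CNNAttentionClassifier(vocab_size=100)
    tokens = torch.randint(0, 100, (4, 12))      # toy batch of 4 sentences, length 12
    labels = torch.randint(0, 2, (4,))
    rationales = torch.zeros(4, 12)
    rationales[:, :3] = 1.0                      # pretend the first 3 tokens were highlighted
    logits, attn = model(tokens)
    loss = loss_with_rationales(logits, attn, labels, rationales)
    loss.backward()
    print("loss:", float(loss))
```

In this kind of setup, the attention weights double as the model's explanation of its prediction, which is how rationale supervision can influence explanation quality as well as predictive accuracy.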
