Measuring and Improving Consistency in Pretrained Language Models

Yanai Elazar
Nora Kassner
Shauli Ravfogel
Abhilasha Ravichander
Hinrich Schütze
Abstract:

Consistency of a model -- that is, the invariance of its behavior under meaning-preserving alternations in its input -- is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create ParaRel, a ...
