AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts

Taylor Shin
Yasaman Razeghi
Robert L. Logan IV

Empirical Methods in Natural Language Processing (EMNLP), pp. 4222-4235, 2020.

Other Links: arxiv.org | academic.microsoft.com

Abstract:

The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blank problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its usage is limited by the manual effort and guesswork required to write suitable prompts.
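
To make the cloze-test framing concrete, below is a minimal sketch of manual prompt-based probing with a masked language model, written against the HuggingFace transformers fill-mask pipeline. This is an illustrative assumption, not the paper's own code: the hand-written prompt stands in for the trigger tokens that AutoPrompt would instead find via gradient-guided search, and the checkpoint is an arbitrary choice.

from transformers import pipeline

# Load a masked language model; bert-base-cased is an illustrative choice,
# not necessarily a checkpoint used in the paper's experiments.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# A LAMA-style factual probe: the MLM's top predictions for the blank act
# as its answer, with no finetuning and no task-specific parameters.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")

Where this sketch relies on a human-written template, AutoPrompt automates the step of finding prompt tokens, using a gradient-guided search as described in the abstract.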
