AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235, 2020.
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blank problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its usage is limited by the manual effort and guesswork required to...