Reflective Decoding: Unsupervised Paraphrasing and Abductive Reasoning
Abstract:
Pretrained Language Models (LMs) generate text with remarkable quality, novelty, and coherence. Yet applying LMs to the problems of paraphrasing and infilling currently requires direct supervision, since these tasks break the left-to-right generation setup of pretrained LMs. We present Reflective Decoding, a novel unsupervised approach t...