Non-autoregressive Neural Machine Translation with Distortion Model

International Conference on Natural Language Processing (2020)

Cited by 1 | Viewed 58

Abstract
Non-autoregressive translation (NAT) has attracted attention recently due to its high efficiency during inference. Unfortunately, it performs significantly worse than the autoregressive translation (AT) model. We observe that the gap between NAT and AT can be remarkably narrowed if we provide the inputs of the decoder in the same order as the target sentence. However, existing NAT models still initialize the decoding process by copying source inputs from left to right, and lack an explicit reordering mechanism for decoder inputs. To address this problem, we propose a novel distortion model that enhances the decoder inputs so as to further improve NAT models. The distortion model, incorporated into the NAT model, reorders the decoder inputs to match the word order of the decoder outputs, which reduces the search space of the non-autoregressive decoder. We verify our approach empirically through a series of experiments on three similar language pairs (En⇒De, En⇒Ro, and De⇒En) and two dissimilar language pairs (Zh⇒En and En⇒Ja). Quantitative and qualitative analyses demonstrate the effectiveness and universality of our proposed approach.
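The abstract contrasts the standard NAT initialization (a left-to-right uniform copy of source representations) with the proposed distortion step that permutes those copies toward the target word order. A minimal sketch of the two steps, assuming toy NumPy embeddings and a given permutation (function names, shapes, and the permutation itself are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def uniform_copy(src_embed: np.ndarray, tgt_len: int) -> np.ndarray:
    """Standard NAT decoder-input initialization: copy source embeddings
    left to right, mapping target position i to source position
    round(i * src_len / tgt_len)."""
    src_len = src_embed.shape[0]
    idx = [min(int(round(i * src_len / tgt_len)), src_len - 1)
           for i in range(tgt_len)]
    return src_embed[idx]

def distort(dec_inputs: np.ndarray, permutation: list) -> np.ndarray:
    """Reorder the copied decoder inputs with a permutation (in the paper,
    predicted by the distortion model) so they approximate target order."""
    return dec_inputs[permutation]

# Toy example: three 1-d "embeddings", target length 3.
src = np.array([[1.0], [2.0], [3.0]])
dec_in = uniform_copy(src, 3)            # still in source order
reordered = distort(dec_in, [2, 0, 1])   # hypothetical predicted order
```

The point of the reordering is that the non-autoregressive decoder no longer has to implicitly learn the source-to-target word alignment on top of translation, which is what the abstract means by reducing its search space.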
Keywords
distortion model, translation, neural, non-autoregressive