
Improving Non-autoregressive Machine Translation with Soft-Masking

NLPCC (2021)

Abstract
In recent years, non-autoregressive machine translation has achieved great success thanks to its promising inference speedup. Non-autoregressive machine translation reduces decoding latency by generating all target words in a single pass. However, there is a considerable accuracy gap between non-autoregressive and autoregressive machine translation. Because it removes the dependencies between target words, non-autoregressive machine translation tends to generate repetitive or wrong words, and these errors lead to low performance. In this paper, we introduce a soft-masking method to alleviate this issue. Specifically, we introduce an autoregressive discriminator that outputs probabilities indicating which embeddings are correct. According to these probabilities, we apply a mask to the copied representations, which enables the model to take into account which words are easy to predict. We evaluated our method on three benchmarks: WMT14 EN → DE, WMT16 EN → RO, and IWSLT14 DE → EN. The experimental results demonstrate that our method outperforms the baseline by a large margin at a small cost in speed.
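The abstract describes the mechanism only at a high level. The following is a minimal PyTorch sketch of what such soft-masking could look like: a discriminator scores each copied decoder input, and that score interpolates between the copied embedding and a learned mask embedding. The module name, the per-position MLP standing in for the autoregressive discriminator, and all shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SoftMasking(nn.Module):
    """Hypothetical sketch of soft-masking for NAT decoder inputs.

    Assumption: a simple per-position MLP stands in for the paper's
    autoregressive discriminator; only the interpolation idea is shown.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Discriminator head: maps each position's representation to a
        # probability that the copied embedding is "correct".
        self.discriminator = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.Tanh(),
            nn.Linear(d_model, 1),
            nn.Sigmoid(),
        )
        # Learned embedding standing in for masked (uncertain) positions.
        self.mask_embedding = nn.Parameter(torch.zeros(d_model))

    def forward(self, copied: torch.Tensor) -> torch.Tensor:
        # copied: (batch, tgt_len, d_model) source representations copied
        # to the decoder input, as is typical in NAT models.
        p = self.discriminator(copied)            # (batch, tgt_len, 1)
        mask = self.mask_embedding.expand_as(copied)
        # High p keeps the copied embedding; low p softly replaces it with
        # the mask embedding, signalling a hard-to-predict position.
        return p * copied + (1.0 - p) * mask
```

Because the mask is applied as a soft interpolation rather than a hard replacement, the whole operation stays differentiable, so the discriminator and the translation model can be trained jointly.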
Keywords
Non-autoregressive, Machine translation, Soft-masking