Self-Modifying State Modeling for Simultaneous Machine Translation

arXiv (2024)

Abstract
Simultaneous Machine Translation (SiMT) generates target outputs while receiving streaming source inputs and requires a read/write policy to decide whether to wait for the next source token or generate a new target token; the sequence of these decisions forms a decision path. Existing SiMT methods, which learn the policy by exploring various decision paths during training, face inherent limitations. They fail to precisely optimize the policy because they cannot accurately assess the individual impact of each decision on SiMT performance, and they cannot sufficiently explore all potential paths because of their vast number. Moreover, building decision paths requires unidirectional encoders to simulate streaming source inputs, which impairs the translation quality of SiMT models. To address these issues, we propose Self-Modifying State Modeling (SM^2), a novel training paradigm for the SiMT task. Instead of building decision paths, SM^2 individually optimizes the decision at each state during training. To precisely optimize the policy, SM^2 introduces a Self-Modifying process that independently assesses and adjusts the decision at each state. For sufficient exploration, SM^2 proposes Prefix Sampling to efficiently traverse all potential states. Moreover, SM^2 is compatible with bidirectional encoders and thus achieves higher translation quality. Experiments show that SM^2 outperforms strong baselines. Furthermore, SM^2 allows offline machine translation models to acquire SiMT capability through fine-tuning.
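To make the read/write mechanism the abstract refers to concrete, the following is a minimal sketch of a generic SiMT decoding loop: at each state (i source tokens read, j target tokens written), a policy chooses READ (wait for the next source token) or WRITE (emit a target token), and the resulting sequence of choices is a decision path. The `policy` and `generate_token` callables, the wait-1 baseline, and the echo "translator" are illustrative placeholders, not the paper's SM^2 implementation.

```python
from typing import Callable, List

def simultaneous_decode(policy: Callable, generate_token: Callable,
                        source_stream: List[str],
                        max_len: int = 128) -> List[str]:
    """Greedy SiMT loop driven by a read/write policy.

    At each state the policy decides whether to READ the next
    streaming source token or WRITE a new target token; the
    sequence of these decisions forms a decision path.
    """
    src: List[str] = []   # source tokens read so far
    tgt: List[str] = []   # target tokens written so far
    pending = list(source_stream)
    while len(tgt) < max_len:
        # Force WRITE once the source is exhausted; otherwise ask the policy.
        action = policy(src, tgt) if pending else "WRITE"
        if action == "READ":
            src.append(pending.pop(0))        # wait for the next source token
        else:
            token = generate_token(src, tgt)  # emit a new target token
            if token == "<eos>":
                break
            tgt.append(token)
    return tgt

# Toy stand-ins: a wait-1 policy and an echo "translator".
wait_1 = lambda src, tgt: "READ" if len(src) <= len(tgt) else "WRITE"
echo = lambda src, tgt: src[len(tgt)] if len(tgt) < len(src) else "<eos>"

print(simultaneous_decode(wait_1, echo, ["hello", "world"]))
# ['hello', 'world']
```

Under this framing, the paper's contribution is to optimize the decision taken at each (i, j) state individually during training, rather than scoring whole decision paths end to end.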