Feedforward sequential memory networks based encoder-decoder model for machine translation.

Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (2017)

Abstract
Recently, recurrent neural network based encoder-decoder models have become a popular approach to sequence-to-sequence mapping problems such as machine translation. However, such models are time-consuming to train, because the temporal dependency of recurrent neural networks prevents the symbols in a sequence from being processed in parallel. In this paper we present a sequence-to-sequence model that replaces the recurrent neural networks in both the encoder and the decoder with feedforward sequential memory networks, which enables the new architecture to encode the entire source sentence simultaneously. We also modify the attention module so that the decoder generates all outputs simultaneously during training. We achieve comparable results on the WMT'14 English-to-French translation task while training 1.4 to 2 times faster, owing to the temporal independence of the feedforward sequential memory network based encoder and decoder.
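The abstract does not give the layer equations, so the following is a minimal sketch of why a feedforward sequential memory network (FSMN) layer parallelizes over time, assuming the commonly published vectorized bidirectional FSMN formulation in which each hidden state is augmented by learned memory taps over a fixed window of neighboring states. The class name `FSMNMemoryBlock` and the `lookback`/`lookahead` parameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class FSMNMemoryBlock(nn.Module):
    """Sketch of a vectorized bidirectional FSMN memory block, implemented as
    a depthwise 1-D convolution over the time axis. The memory taps look at a
    fixed window of past and future hidden states instead of a recurrent
    state, so every time step can be computed in parallel."""

    def __init__(self, hidden_size: int, lookback: int = 4, lookahead: int = 4):
        super().__init__()
        # One filter per hidden dimension (groups=hidden_size) plays the role
        # of the per-dimension memory coefficients of a vectorized FSMN.
        self.pad = (lookback, lookahead)
        self.taps = nn.Conv1d(hidden_size, hidden_size,
                              kernel_size=lookback + lookahead + 1,
                              groups=hidden_size, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, hidden); Conv1d expects (batch, hidden, time).
        x = h.transpose(1, 2)
        x = nn.functional.pad(x, self.pad)   # zero-pad past and future context
        mem = self.taps(x).transpose(1, 2)   # back to (batch, time, hidden)
        return h + mem                       # hidden states plus memory output

# Usage: the whole sentence is processed in one call, with no recurrence.
block = FSMNMemoryBlock(hidden_size=512)
h = torch.randn(8, 20, 512)   # (batch, time, hidden)
out = block(h)                # (8, 20, 512)
```

Because the block is a convolution rather than a recurrence, there is no hidden-state dependency between time steps, which is the property the paper exploits to encode the entire source sentence (and, with the modified attention, decode during training) in parallel.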
Keywords
feedforward sequential memory networks, encoder-decoder model, machine translation, recurrent neural networks, sequence mapping problems, sequence model