An analysis of incorporating an external language model into a sequence-to-sequence model.

International Conference on Acoustics, Speech, and Signal Processing (2018)

Abstract
Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, language model, and alignment mechanism. Thus, the language model component is trained only on transcribed audio-text pairs. This motivates the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation with a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that shallow fusion with a neural LM over wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.
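As a minimal sketch of the shallow-fusion scoring rule described above: at each beam-search step, the decoder's per-token log-probabilities are combined log-linearly with those of an external LM, weighted by an interpolation coefficient. The function name, the toy vocabulary, and the weight value below are illustrative assumptions, not details from the paper.

```python
import math

def shallow_fusion_step(seq2seq_log_probs, lm_log_probs, lam=0.3):
    """Log-linear interpolation (shallow fusion) at one beam-search step.

    seq2seq_log_probs: dict token -> log p from the seq2seq decoder
    lm_log_probs:      dict token -> log p from the external LM
    lam:               illustrative interpolation weight (a tuned hyperparameter)
    """
    return {tok: seq2seq_log_probs[tok] + lam * lm_log_probs.get(tok, -math.inf)
            for tok in seq2seq_log_probs}

# Toy example: two candidate wordpieces at a single decoding step.
s2s = {"cat": math.log(0.6), "cap": math.log(0.4)}
lm = {"cat": math.log(0.9), "cap": math.log(0.1)}
fused = shallow_fusion_step(s2s, lm, lam=0.5)
best = max(fused, key=fused.get)  # the LM sharpens the preference for "cat"
```

In a full decoder this fused score would replace the seq2seq score when ranking and pruning beam hypotheses, which is how the external LM influences the search without being part of the trained model.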
Keywords
automatic speech recognition,alignment mechanism,transcribed audio-text pairs,log-linear interpolation,Google Voice Search,neural LM,word error rate reduction,competitive attention-based sequence-to-sequence model,language model component,acoustic model,external language model