Improving N-gram Language Models with Pre-trained Deep Transformer

Hongzhao Huang
Zhe Liu
Yutong Pang
Yongqiang Wang
Fuchun Peng

Abstract:

Although n-gram language models (LMs) have been outperformed by state-of-the-art neural LMs, they are still widely used in speech recognition due to their high efficiency in inference. In this paper, we demonstrate that n-gram LMs can be improved by neural LMs through a text-generation-based data augmentation method. In contrast to pre...
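As a rough illustration of the data augmentation idea described in the abstract (not the authors' implementation), the following Python sketch samples sentences from a pre-trained Transformer LM and trains an n-gram LM on the original corpus plus the generated text. The model name ("gpt2"), the sampling settings, and the 3-gram order are illustrative assumptions, not values taken from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline


def sample_synthetic_sentences(model_name="gpt2", num_sentences=50, max_length=32):
    """Draw unconditional samples from a pre-trained Transformer LM."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    bos = torch.tensor([[tokenizer.bos_token_id]])  # start generation from the BOS token
    with torch.no_grad():
        outputs = model.generate(
            bos,
            do_sample=True,                  # random sampling rather than greedy decoding
            top_p=0.95,                      # nucleus sampling (assumed setting)
            max_length=max_length,
            num_return_sequences=num_sentences,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Whitespace tokenization is a stand-in for the ASR system's real tokenizer.
    return [tokenizer.decode(seq, skip_special_tokens=True).split() for seq in outputs]


def train_ngram_lm(tokenized_sentences, order=3):
    """Train an add-one smoothed n-gram LM with NLTK.

    A production ASR system would typically use Kneser-Ney smoothing via a
    toolkit such as KenLM or SRILM; Laplace keeps this demo self-contained.
    """
    train_ngrams, vocab = padded_everygram_pipeline(order, tokenized_sentences)
    lm = Laplace(order)
    lm.fit(train_ngrams, vocab)
    return lm


if __name__ == "__main__":
    # Placeholder for the real in-domain training text.
    real_corpus = [["the", "cat", "sat", "on", "the", "mat"]]
    synthetic = sample_synthetic_sentences()
    augmented_lm = train_ngram_lm(real_corpus + synthetic)
    print(augmented_lm.score("cat", ["the"]))  # P(cat | the) under the augmented LM

In practice one would sample far more text and tune how the generated sentences are weighted against the real corpus; the figures above are placeholders for illustration only.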
