Grapheme-to-phoneme conversion using Long Short-Term Memory recurrent neural networks

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015

Citations: 270 | Views: 148
Abstract
Grapheme-to-phoneme (G2P) models are key components in speech recognition and text-to-speech systems, as they describe how words are pronounced. We propose a G2P model based on a Long Short-Term Memory (LSTM) recurrent neural network (RNN). In contrast to traditional joint-sequence based G2P approaches, LSTMs can take the full context of graphemes into account, transforming the problem from a series of grapheme-to-phoneme conversions into a single word-to-pronunciation conversion. Training joint-sequence based G2P models requires explicit grapheme-to-phoneme alignments, which are not straightforward to obtain since graphemes and phonemes do not correspond one-to-one. The LSTM-based approach forgoes the need for such explicit alignments. We experiment with a unidirectional LSTM (ULSTM) with different kinds of output delays and a deep bidirectional LSTM (DBLSTM) with a connectionist temporal classification (CTC) layer. The DBLSTM-CTC model achieves a word error rate (WER) of 25.8% on the public CMU dataset for US English. Combining the DBLSTM-CTC model with a joint n-gram model results in a WER of 21.3%, a 9% relative improvement over the previous best WER of 23.4% from a hybrid system.
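The 9% relative improvement quoted in the abstract follows directly from the two WER figures. A quick check of the arithmetic (the helper function below is illustrative, not from the paper):

```python
def relative_improvement(baseline_wer: float, new_wer: float) -> float:
    """Relative WER reduction, in percent, of new_wer versus baseline_wer."""
    return (baseline_wer - new_wer) / baseline_wer * 100.0

# Figures from the abstract: 23.4% WER for the previous best hybrid system,
# 21.3% WER for the combined DBLSTM-CTC + joint n-gram model.
improvement = relative_improvement(23.4, 21.3)
print(f"{improvement:.1f}% relative improvement")  # prints "9.0% relative improvement"
```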
Keywords
neural nets, speech recognition, speech synthesis, synchronisation, CTC layer, DBLSTM-CTC model, G2P models, RNN, ULSTM, US English, WER, connectionist temporal classification, deep bidirectional LSTM, grapheme-to-phoneme alignments, grapheme-to-phoneme conversion, grapheme-to-phoneme models, hybrid system, joint n-gram model, joint-sequence based G2P, long short-term memory recurrent neural networks, public CMU dataset, text-to-speech systems, unidirectional LSTM, word error rate, word-to-pronunciation conversion, CTC, G2P, LSTM, pronunciation