Multilingual acoustic models using distributed deep neural networks

Acoustics, Speech and Signal Processing (2013)

Abstract
Today's speech recognition technology is mature enough to be useful for many practical applications. In this context, it is of paramount importance to train accurate acoustic models for many languages within given resource constraints such as data, processing power, and time. Multilingual training has the potential to solve the data issue and close the performance gap between resource-rich and resource-scarce languages. Neural networks lend themselves naturally to parameter sharing across languages, and distributed implementations have made it feasible to train large networks. In this paper, we present experimental results for cross- and multi-lingual network training of eleven Romance languages on 10k hours of data in total. The average relative gains over the monolingual baselines are 4%/2% (data-scarce/data-rich languages) for cross-lingual and 7%/2% for multi-lingual training. However, the additional gain from jointly training the languages on all data comes at an increased training time of roughly four weeks, compared to two weeks (monolingual) and one week (cross-lingual).
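To make the parameter-sharing idea concrete, the following is a minimal sketch of a feed-forward acoustic model in which the hidden layers are shared across languages and each language gets its own output layer. The framework (PyTorch), the layer sizes, the language codes, and the default state-inventory size are illustrative assumptions; the paper itself relies on a distributed training implementation that this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class MultilingualDNN(nn.Module):
    """Feed-forward acoustic model with hidden layers shared across
    languages and one output layer per language.

    Layer sizes, language list, and the use of PyTorch are illustrative
    assumptions, not the paper's distributed implementation.
    """

    def __init__(self, languages, input_dim=440, hidden_dim=2048,
                 num_hidden=4, states_per_language=None):
        super().__init__()
        states_per_language = states_per_language or {}
        layers, dim = [], input_dim
        for _ in range(num_hidden):
            layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
            dim = hidden_dim
        # Hidden layers: shared by all languages (the multilingual part).
        self.shared = nn.Sequential(*layers)
        # Output layers: one logits layer per language, sized to that
        # language's (assumed) context-dependent HMM state inventory.
        self.heads = nn.ModuleDict({
            lang: nn.Linear(hidden_dim, states_per_language.get(lang, 8000))
            for lang in languages
        })

    def forward(self, features, lang):
        """features: (batch, input_dim) stacked frames; lang: language code."""
        return self.heads[lang](self.shared(features))


# Usage: each minibatch comes from a single language and updates the
# shared hidden layers plus that language's output layer only.
model = MultilingualDNN(["fr", "it", "pt"])
frames = torch.randn(32, 440)          # 32 frames of stacked acoustic features
logits = model(frames, lang="fr")      # shape: (32, num_states_for_fr)
```

In this setup, cross-lingual training corresponds to reusing the shared layers trained on one set of languages when training a new language's output layer, while multi-lingual training optimizes the shared layers jointly over minibatches drawn from all languages.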
Keywords
languages, neural nets, speech recognition, Romance languages, distributed deep neural networks, multilingual acoustic models, multilingual network training, processing power, resource-scarce languages, speech recognition technology, train large networks, deep neural networks, distributed neural networks, multilingual training, parameter sharing