Learning FOFE based FNN-LMs with noise contrastive estimation and part-of-speech features

2016 10th International Symposium on Chinese Spoken Language Processing (ISCSLP)

Abstract
A simple but powerful class of language models, fixed-size ordinally-forgetting encoding (FOFE) based feedforward neural network language models (FNN-LMs), has been proposed recently. Experimental results have shown that FOFE based FNN-LMs can outperform not only standard FNN-LMs but also the popular recurrent neural network language models (RNN-LMs). In this paper, we extend FOFE based FNN-LMs in several ways. First, we propose a new method to further improve the performance of FOFE based FNN-LMs by adding transitions of part-of-speech (POS) tags as additional features. Second, we investigate how to speed up the training of FOFE based FNN-LMs using noise contrastive estimation (NCE). As a result, we can dramatically speed up the training of FOFE based FNN-LMs while still achieving very competitive results on the Large Text Compression Benchmark (LTCB).
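For context, FOFE represents a variable-length word history as a fixed-size vector via the recursion z_t = alpha * z_{t-1} + e(w_t), where e(w_t) is the one-hot vector of the t-th word and 0 < alpha < 1 is the forgetting factor. The sketch below is a minimal NumPy illustration of this encoding only, not the authors' implementation; the vocabulary size, the value alpha = 0.7, and the example context are assumptions made for illustration.

    import numpy as np

    def fofe_encode(word_ids, vocab_size, alpha=0.7):
        # FOFE recursion: z_t = alpha * z_{t-1} + e(w_t),
        # where e(w_t) is the one-hot vector of the t-th word.
        z = np.zeros(vocab_size)
        for w in word_ids:
            z = alpha * z      # decay the encoded history
            z[w] += 1.0        # add the one-hot vector of the current word
        return z

    # Hypothetical context [3, 1, 4] over a 10-word vocabulary:
    code = fofe_encode([3, 1, 4], vocab_size=10)
    print(code)  # word 4 -> 1.0, word 1 -> 0.7, word 3 -> 0.49

Because the resulting code has a fixed dimension regardless of the context length, it can be fed directly into a standard feedforward network, which is what allows FOFE based FNN-LMs to model long histories without recurrence.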
Keywords
language model,fixed-size ordinally-forgetting encoding,noise contrastive estimation,transitions of part-of-speech tags