Accounting for the residual uncertainty of multi-layer perceptron based features.

ICASSP (2014)

Abstract
Multi-Layer Perceptrons (MLPs) are often interpreted as modeling a posterior distribution over classes given input features, using the mean field approximation. This approximation is fast but neglects the residual uncertainty of inference at each layer, making inference less robust. In this paper we introduce a new approximation of MLP inference that takes this residual uncertainty into account. The proposed algorithm propagates not only the mean but also the variance of inference through the network. At its current stage, the proposed method cannot be used with softmax layers. We therefore illustrate the benefits of this algorithm in a tandem scheme: the residual uncertainty of inference of MLP-based features is used to compensate a GMM-HMM back-end with uncertainty decoding. Experiments on the Aurora4 corpus show consistent performance improvements over conventional MLPs in all scenarios, in particular for clean speech and multi-style training.
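The core idea of the abstract, propagating the variance of inference alongside the mean through each layer, can be sketched generically. The snippet below is a hypothetical NumPy illustration, not the paper's exact algorithm: it assumes a diagonal-Gaussian representation of the activations, exact moment matching for the linear layer under an input-independence assumption, and a first-order Taylor (delta-method) approximation for the sigmoid nonlinearity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def propagate_linear(mu, var, W, b):
    """Propagate a diagonal Gaussian through y = W x + b.

    Assuming independent inputs, the output mean is W mu + b and the
    output variance is (W**2) var (elementwise square of the weights).
    """
    mu_out = W @ mu + b
    var_out = (W ** 2) @ var
    return mu_out, var_out

def propagate_sigmoid(mu, var):
    """First-order (delta-method) approximation of a sigmoid layer.

    E[s(y)] ~= s(mu) and Var[s(y)] ~= (s'(mu))**2 * var. This is one
    common approximation; the paper's propagation rule may differ.
    """
    s = sigmoid(mu)
    deriv = s * (1.0 - s)
    return s, (deriv ** 2) * var

# Toy forward pass through one hidden layer with an uncertain input.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), np.zeros(4)
mu_x, var_x = rng.normal(size=3), 0.1 * np.ones(3)
mu_h, var_h = propagate_sigmoid(*propagate_linear(mu_x, var_x, W, b))
print(mu_h, var_h)
```

The delta-method step is only one possible choice of moment approximation for the nonlinearity, and, as the abstract notes, this style of propagation does not extend to softmax output layers; in a tandem setup the propagated variance would instead be passed to the GMM-HMM back-end for uncertainty decoding.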
Keywords
uncertainty, learning (artificial intelligence), posterior distribution, Gaussian processes, hidden Markov models, multi-layer perceptron, speech, acoustics, mean field approximation, mixture models, speech recognition, mean field theory, tandem