Learning Noise-Invariant Representations for Robust Speech Recognition.

2018 IEEE Spoken Language Technology Workshop (SLT), 2018

Cited by 43 | Views 50
Abstract
Despite rapid advances in speech recognition, current models remain brittle to superficial perturbations of their inputs. Small amounts of noise can destroy the performance of an otherwise state-of-the-art model. To harden models against background noise, practitioners often perform data augmentation, adding artificially-noised examples to the training set while carrying over the original label. In this paper, we hypothesize that a clean example and its superficially perturbed counterparts shouldn't merely map to the same class: they should map to the same representation. We propose invariant-representation-learning (IRL): at each training iteration, for each training example, we sample a noisy counterpart. We then apply a penalty term to coerce matched representations at each layer (above some chosen layer). Our key results, demonstrated on the LibriSpeech dataset, are the following: (i) IRL significantly reduces character error rates (CER) on both the 'clean' (3.3% vs. 6.5%) and 'other' (11.0% vs. 18.1%) test sets; (ii) on several out-of-domain noise settings (different from those seen during training), IRL's benefits are even more pronounced. Careful ablations confirm that our results are not simply due to shrinking activations at the chosen layers.
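The per-layer penalty described in the abstract might look like the following sketch in PyTorch. This is a minimal illustration, not the authors' implementation: the toy model, the function and parameter names (forward_with_activations, irl_weight, start_layer), and the choice of an L2 distance penalty are assumptions made here for clarity.

```python
# Minimal sketch of an IRL-style training step (illustrative, not the paper's code).
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for an ASR encoder; returns logits plus per-layer activations."""
    def __init__(self, dim=40, hidden=128, num_classes=29):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, hidden), nn.Linear(hidden, hidden)])
        self.head = nn.Linear(hidden, num_classes)

    def forward_with_activations(self, x):
        acts = []
        h = x
        for layer in self.layers:
            h = torch.relu(layer(h))
            acts.append(h)
        return self.head(h), acts

def irl_loss(model, clean, noisy, targets, task_loss_fn, start_layer=1, irl_weight=1.0):
    """Task loss on clean and noisy inputs, plus an (assumed) L2 penalty that ties
    the clean and noisy representations at every layer above `start_layer`."""
    logits_c, acts_c = model.forward_with_activations(clean)
    logits_n, acts_n = model.forward_with_activations(noisy)
    loss = task_loss_fn(logits_c, targets) + task_loss_fn(logits_n, targets)
    penalty = sum(torch.mean((a - b) ** 2)
                  for a, b in zip(acts_c[start_layer:], acts_n[start_layer:]))
    return loss + irl_weight * penalty

# Usage: one training step on synthetic data (batch of 8 feature vectors).
model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(8, 40)
noisy = clean + 0.1 * torch.randn(8, 40)      # sampled noisy counterpart
targets = torch.randint(0, 29, (8,))
loss = irl_loss(model, clean, noisy, targets, nn.CrossEntropyLoss())
opt.zero_grad(); loss.backward(); opt.step()
```

Here a simple per-frame classification loss stands in for the sequence-level ASR loss; the point is only to show the clean/noisy pairing and the layer-wise matching penalty above a chosen layer.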
Keywords
Noise measurement, Training, Data models, Speech recognition, Decoding, Error analysis, Recurrent neural networks