Tackling unseen acoustic conditions in query-by-example search using time and frequency convolution for multilingual deep bottleneck features

2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)(2017)

Abstract
Standard keyword spotting based on Automatic Speech Recognition (ASR) cannot be used on low- and no-resource languages due to a lack of annotated data and/or linguistic resources. In recent years, query-by-example (QbE) has emerged as an alternative way to enroll and find spoken queries in large audio corpora, yet mismatched and unseen acoustic conditions remain a difficult challenge given the lack of enrollment data. This paper revisits two neural network architectures developed for noise- and channel-robust ASR, and applies them to building a state-of-the-art multilingual QbE system. By applying convolution in time or frequency across the spectrum, these convolutional bottlenecks learn more discriminative deep bottleneck features. In conjunction with dynamic time warping (DTW), these features enable robust QbE systems. We use the MediaEval 2014 QUESST data to evaluate robustness against language and channel mismatches, and add several levels of artificial noise to the data to evaluate performance in degraded acoustic environments. We also assess performance on an Air Traffic Control QbE task with more realistic and higher levels of distortion in the push-to-talk domain.
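The core matching step described above, aligning a spoken query against a longer audio document using DTW over bottleneck features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes frame-level feature vectors (e.g. deep bottleneck features) are already extracted as NumPy arrays, and uses a simple length-normalized cosine-distance DTW rather than the subsequence-DTW variants typically used for QbE search.

```python
import numpy as np

def cosine_dist(a, b):
    # 1 - cosine similarity; near 0 for identical frames
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def dtw_cost(query, search):
    """Length-normalized DTW alignment cost between two feature
    sequences: query is (n, d), search is (m, d), each row one frame."""
    n, m = len(query), len(search)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = cosine_dist(query[i - 1], search[j - 1])
            # standard DTW recursion: match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

# Hypothetical usage: a low cost suggests the query occurs in the segment.
query = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
segment = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
score = dtw_cost(query, segment)
```

In a full QbE system this cost would be computed over sliding windows (or with subsequence DTW) across the corpus, and detections ranked by score; the point here is only that more discriminative bottleneck features directly lower the alignment cost for true matches.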
Keywords
query-by-example, multilingual bottleneck, convolutional neural networks, noise robustness, channel robustness