BanSpeech: A Multi-Domain Bangla Speech Recognition Benchmark Toward Robust Performance in Challenging Conditions

IEEE Access (2024)

Abstract
Despite huge improvements in automatic speech recognition (ASR) using neural networks, ASR systems still suffer from robustness and generalizability issues caused by domain shift. This is mainly because principal corpus design criteria are often not identified and examined adequately when compiling ASR datasets. In this study, we investigate the robustness of fully supervised convolutional neural networks (CNNs) and of state-of-the-art transfer learning approaches, namely self-supervised wav2vec 2.0 and weakly supervised Whisper, for multi-domain ASR. We also demonstrate the significance of domain selection when building a corpus by assessing these models on a novel multi-domain Bangladeshi Bangla ASR evaluation benchmark, BanSpeech, which contains approximately 6.52 hours of human-annotated speech, totaling 8085 utterances across 13 distinct domains. SUBAK.KO, a mostly read-speech corpus for the morphologically rich language Bangla, has been used to train the ASR systems. Experimental evaluation reveals that self-supervised cross-lingual pre-training with wav2vec 2.0 is the best strategy, compared to weak supervision and full supervision, for tackling the multi-domain ASR task. Moreover, the ASR models trained on SUBAK.KO have difficulty recognizing speech from domains consisting mostly of spontaneous speech. BanSpeech is publicly available to meet the need for a challenging evaluation benchmark for Bangla ASR.
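The abstract describes running pretrained speech models on Bangla audio and scoring them against human transcriptions. As an illustration only, and not the paper's actual pipeline, the sketch below shows how a weakly supervised multilingual model such as Whisper could transcribe a Bangla recording and be scored with word error rate (WER); the checkpoint name, audio path, and reference text are placeholder assumptions, and the paper's own experiments evaluate SUBAK.KO-trained models on the 13 BanSpeech domains.

```python
# Hypothetical evaluation sketch: transcribe one Bangla utterance with a
# pretrained multilingual Whisper checkpoint and compute WER against a
# reference. Model ID, file path, and reference text are placeholders,
# not the checkpoints or data released with the paper.
import torch
import librosa
from jiwer import wer
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
# Force Bengali transcription (rather than language auto-detection or translation).
forced_ids = processor.get_decoder_prompt_ids(language="bengali", task="transcribe")

def transcribe(path: str) -> str:
    # Whisper expects 16 kHz mono audio.
    audio, _ = librosa.load(path, sr=16000, mono=True)
    inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
    return processor.batch_decode(ids, skip_special_tokens=True)[0]

# Placeholder test pair; in practice one would iterate over every utterance in
# each benchmark domain and report per-domain WER.
hypothesis = transcribe("sample_bangla_utterance.wav")
reference = "reference transcription here"
print("WER:", wer(reference, hypothesis))
```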
Keywords
Speech recognition,Data models,Benchmark testing,Speech processing,Robustness,Solid modeling,Task analysis,Automatic speech recognition,Transfer learning,Neural networks,Convolutional neural networks,Supervised learning,Bangla,domain shifting,read speech,spontaneous speech,transfer learning