Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters
INTERSPEECH, pp. 4751-4755, 2020.
We study training a single acoustic model for multiple languages with the aim of improving automatic speech recognition (ASR) performance on low-resource languages, and overall simplifying deployment of ASR systems that support diverse languages. We perform an extensive benchmark on 51 languages, with varying amounts of training data by…