
From English to More Languages: Parameter-Efficient Model Reprogramming for Cross-Lingual Speech Recognition

2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023)

Google

Abstract
In this work, we propose a new parameter-efficient learning framework based on neural model reprogramming for cross-lingual speech recognition, which can re-purpose well-trained English automatic speech recognition (ASR) models to recognize other languages. We design different auxiliary neural architectures focusing on learnable pre-trained feature enhancement that, for the first time, empowers model reprogramming on ASR. Specifically, we investigate how to select trainable components (i.e., encoders) of a conformer-based RNN-Transducer while keeping it as a frozen pre-trained backbone. Experiments on a seven-language Multilingual LibriSpeech (MLS) task show that model reprogramming requires only 4.2% to 6.8% of the original trainable parameters of a full ASR model to achieve competitive results, with average WERs ranging from 11.9% to 8.1%. In addition, we discover different setups that make large-scale pre-trained ASR succeed in both monolingual and multilingual speech recognition. Our methods outperform existing ASR tuning architectures and their extensions with self-supervised losses (e.g., w2v-bert) in terms of lower WER and better training efficiency.
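The core idea in the abstract, keeping a pre-trained backbone completely frozen while training only a small set of input-side reprogramming parameters, can be sketched as follows. This is a minimal illustrative toy: the fixed linear map, the additive perturbation `delta`, and all sizes are assumptions for demonstration, not the paper's conformer-based RNN-Transducer.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Frozen pre-trained backbone": a fixed map whose weights are never updated.
# (Stands in for the pre-trained English ASR encoder in the paper.)
W_frozen = rng.standard_normal((4, 4))

def backbone(x):
    # Frozen forward pass; gradients would not flow into W_frozen.
    return W_frozen @ x

# Trainable reprogramming parameters: a learnable additive perturbation
# applied to the input features before the frozen backbone.
delta = np.zeros(4)

def reprogrammed_forward(x):
    # Only `delta` would be optimized for the new (target-language) task.
    return backbone(x + delta)

# Parameter-efficiency bookkeeping: only `delta` counts as trainable.
n_trainable = delta.size
n_total = delta.size + W_frozen.size
fraction = n_trainable / n_total
print(f"trainable fraction: {fraction:.1%}")
```

In this toy setup, 4 of 20 parameters (20%) are trainable; the paper reports analogous ratios of 4.2% to 6.8% for its full-scale models.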
Key words
Cross-lingual speech recognition, model reprogramming, pre-trained adaptation, foundation speech models

Highlights: This work proposes a parameter-efficient learning framework based on neural model reprogramming for cross-lingual speech recognition, which repurposes well-trained English automatic speech recognition (ASR) models to recognize other languages, and introduces different auxiliary neural architectures focused on learnable pre-trained feature enhancement.

Method: The framework performs model reprogramming on a conformer-based RNN-Transducer kept as a frozen pre-trained backbone, and investigates how to select its trainable components (i.e., the encoder).

Experiments: Experiments on the seven-language Multilingual LibriSpeech (MLS) task show that model reprogramming needs only 4.2% to 6.8% of the original trainable parameters to achieve competitive results across languages (11.9% to 8.1% average WER). In addition, different setups are identified that make large-scale pre-trained ASR succeed in both monolingual and multilingual speech recognition. The proposed methods outperform existing ASR tuning architectures and their extensions with self-supervised losses (e.g., w2v-bert) in terms of lower WER and better training efficiency.