Self-Supervised Representations Improve End-to-End Speech Translation

Anne Wu
Changhan Wang
Juan Pino

INTERSPEECH, pp. 1491-1495, 2020.

DOI: https://doi.org/10.21437/Interspeech.2020-3094

Abstract:

End-to-end speech-to-text translation can provide a simpler and smaller system but faces the challenge of data scarcity. Pre-training methods can leverage unlabeled data and have been shown to be effective in data-scarce settings. In this work, we explore whether self-supervised pre-trained speech representations can benefit the spe…
