SEQUENCE-LEVEL SELF-TEACHING REGULARIZATION

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)

Abstract
In our previous work, we proposed a frame-level self-teaching network to regularize a deep neural network during training. In this paper, we extend that approach and propose a sequence-level self-teaching network to regularize sequence-level information in speech recognition. The idea is to generate sequence-level soft supervision labels from the top layer of the network to supervise the training of lower-layer parameters. The network is trained with an auxiliary criterion that reduces the sequence-level Kullback-Leibler (KL) divergence between the top layer and the lower layers, where the posterior probabilities in the KL-divergence term are computed from a lattice at the sequence level. We evaluate the sequence-level self-teaching regularization approach with bidirectional long short-term memory (BLSTM) models on the LibriSpeech task, and show consistent improvements over a baseline trained with the discriminative sequence-level maximum mutual information (MMI) criterion.
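A minimal sketch of the self-teaching regularizer described above is given below. It assumes PyTorch; the module names, the choice of auxiliary softmax head, the tapped layer index, and the interpolation weight are illustrative assumptions, not the authors' implementation. The paper computes the KL term over sequence-level posteriors derived from lattices; for brevity the sketch treats the posteriors as per-frame distributions and only shows the overall shape of the top-to-lower-layer KL supervision.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfTeachingBLSTM(nn.Module):
    # Hypothetical BLSTM acoustic model with one auxiliary output tapped from a lower layer.
    def __init__(self, feat_dim=80, hidden=512, num_layers=6, num_targets=9000,
                 tap_layer=3, kl_weight=0.1):
        super().__init__()
        self.blstm = nn.ModuleList(
            nn.LSTM(feat_dim if i == 0 else 2 * hidden, hidden,
                    batch_first=True, bidirectional=True)
            for i in range(num_layers)
        )
        self.top_head = nn.Linear(2 * hidden, num_targets)  # main (top-layer) output
        self.aux_head = nn.Linear(2 * hidden, num_targets)  # lower-layer auxiliary output
        self.tap_layer = tap_layer
        self.kl_weight = kl_weight

    def forward(self, feats):
        # feats: (batch, time, feat_dim)
        x, aux_logits = feats, None
        for i, layer in enumerate(self.blstm):
            x, _ = layer(x)
            if i == self.tap_layer:
                aux_logits = self.aux_head(x)  # lower-layer posteriors (logits)
        top_logits = self.top_head(x)          # top-layer posteriors (logits)
        return top_logits, aux_logits

    def self_teaching_loss(self, top_logits, aux_logits):
        # Soft supervision labels come from the top layer; gradients are stopped
        # so the top layer teaches the lower layer, not the other way around.
        teacher = F.softmax(top_logits, dim=-1).detach()
        student_log = F.log_softmax(aux_logits, dim=-1)
        return self.kl_weight * F.kl_div(student_log, teacher, reduction='batchmean')

In use, the total training objective would combine the main sequence-discriminative (e.g. MMI) loss on the top-layer output with this auxiliary KL term, so the lower layers receive both the usual gradient and the soft supervision signal from the top layer.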
Keywords
Speech recognition, sequence training, deep neural network, self-teaching, regularization