Enhanced BERT-Based Ranking Models for Spoken Document Retrieval

Hsiao-Yun Lin, Tien-Hong Lo, Berlin Chen

2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019

Abstract
The Bidirectional Encoder Representations from Transformers (BERT) model has recently achieved record-breaking success on many natural language processing (NLP) tasks such as question answering and language understanding. However, relatively little work has been done on ad-hoc information retrieval (IR), especially for spoken document retrieval (SDR). This paper adopts and extends BERT for SDR, and its contributions are at least three-fold. First, we augment BERT with extra language features such as unigram and inverse document frequency (IDF) statistics to make it more applicable to SDR. Second, we explore the incorporation of confidence scores into document representations to see whether they can help alleviate the negative effects of imperfect automatic speech recognition (ASR). Third, we conduct a comprehensive set of experiments to compare our BERT-based ranking methods with other state-of-the-art ones and to investigate their synergistic effects.
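
To make the abstract's two augmentation ideas concrete, the following minimal Python sketch interpolates a precomputed BERT query-document relevance score with an IDF-weighted unigram overlap in which each matched spoken-document term is discounted by its ASR confidence. The function names, the interpolation weight alpha, and the specific fusion scheme are illustrative assumptions for exposition, not the paper's actual formulation.

import math
from collections import Counter

def idf(term, doc_freq, num_docs):
    # Smoothed inverse document frequency (illustrative formulation).
    return math.log((num_docs + 1) / (doc_freq.get(term, 0) + 1))

def lexical_score(query_terms, doc_terms, doc_freq, num_docs, confidences=None):
    # IDF-weighted unigram overlap between the query and the spoken document;
    # each matched document term is discounted by its ASR confidence in [0, 1].
    confidences = confidences or {}
    doc_counts = Counter(doc_terms)
    score = 0.0
    for t in set(query_terms):
        if doc_counts[t] > 0:
            score += idf(t, doc_freq, num_docs) * confidences.get(t, 1.0)
    return score

def combined_score(bert_score, lex_score, alpha=0.7):
    # Linear interpolation of the neural (BERT) score and the lexical score.
    return alpha * bert_score + (1.0 - alpha) * lex_score

# Toy usage with made-up statistics and a hypothetical precomputed BERT score.
doc_freq = {"retrieval": 120, "spoken": 30, "document": 200}
query = ["spoken", "document", "retrieval"]
doc = ["spoken", "document", "retrieval", "asr", "errors"]
conf = {"spoken": 0.9, "document": 0.8, "retrieval": 0.95}
lex = lexical_score(query, doc, doc_freq, num_docs=10000, confidences=conf)
print(combined_score(bert_score=0.83, lex_score=lex))

In this sketch, terms recognized with low confidence contribute less to the lexical match, which is one simple way the ASR confidence scores described in the abstract could temper recognition errors in the document representation.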
Keywords
Spoken document retrieval, information retrieval, speech recognition, model augmentation, BERT