Unconstrained Dysfluency Modeling for Dysfluent Speech Transcription and Detection
2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023
Abstract
Dysfluent speech modeling requires time-accurate and silence-aware
transcription at both the word-level and phonetic-level. However, current
research in dysfluency modeling primarily focuses on either transcription or
detection, and the performance of each aspect remains limited. In this work, we
present an unconstrained dysfluency modeling (UDM) approach that addresses both
transcription and detection in an automatic and hierarchical manner, eliminating
the need for extensive manual annotation.
comprehensive solution. Furthermore, we introduce a simulated dysfluent dataset
called VCTK++ to enhance the capabilities of UDM in phonetic transcription. Our
experimental results demonstrate the effectiveness and robustness of our
proposed methods in both transcription and detection tasks.
Keywords
dysfluent speech, transcription, detection