Decorrelating Language Model Embeddings for Speech-Based Prediction of Cognitive Impairment

ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
Training robust clinical speech-based models that generalize well requires large sample sizes because speech is variable and high-dimensional. Researchers have therefore turned to foundation models, such as Bidirectional Encoder Representations from Transformers (BERT), to generate lower-dimensional embeddings, and then fine-tuned these models for a specific downstream clinical task. While there is empirical evidence that this approach helps, a recent study reveals that the embeddings generated by BERT models tend to be highly correlated, which makes the downstream models difficult to fine-tune, particularly in the small-sample-size regime. In this work, we propose a new regularization scheme that penalizes correlated embeddings during fine-tuning of BERT and apply the approach to speech-based assessment of cognitive impairment. Compared to existing methods, the proposed method yields lower estimation errors and smaller false-alarm rates on a Mini-Mental State Examination (MMSE) score regression task.
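The abstract does not give the exact form of the penalty, so the following PyTorch sketch illustrates one plausible instantiation rather than the paper's method: a DeCov-style regularizer on the off-diagonal entries of the batch correlation matrix of the pooled BERT embeddings, added to an MSE loss for MMSE score regression. The checkpoint name, the [CLS] pooling, the regression head, and the penalty weight `lam` are all assumptions made for illustration.

```python
# Sketch: fine-tune BERT with a decorrelation penalty on its embeddings.
# Assumptions (not from the paper): bert-base-uncased checkpoint, [CLS]
# pooling, a linear MMSE regression head, and a DeCov-style penalty.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

def decorrelation_penalty(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Sum of squared off-diagonal entries of the embedding correlation matrix.

    z: (batch_size, embed_dim) pooled embeddings for one mini-batch.
    """
    z = z - z.mean(dim=0, keepdim=True)          # center each dimension
    z = z / (z.std(dim=0, keepdim=True) + eps)   # unit variance per dimension
    corr = (z.T @ z) / (z.shape[0] - 1)          # (embed_dim, embed_dim)
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum() / z.shape[1]    # normalize by embed_dim

encoder = AutoModel.from_pretrained("bert-base-uncased")      # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
head = nn.Linear(encoder.config.hidden_size, 1)               # MMSE regression head
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()), lr=2e-5
)
lam = 0.1  # penalty weight (assumed; would be tuned on validation data)

def training_step(transcripts: list[str], mmse: torch.Tensor) -> torch.Tensor:
    batch = tokenizer(transcripts, padding=True, truncation=True,
                      return_tensors="pt")
    z = encoder(**batch).last_hidden_state[:, 0]              # [CLS] embeddings
    pred = head(z).squeeze(-1)
    loss = nn.functional.mse_loss(pred, mmse) + lam * decorrelation_penalty(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
```

Driving the off-diagonal correlations toward zero encourages the embedding dimensions to carry complementary information, which is the property the proposed regularization targets during fine-tuning.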
Keywords
Language modeling, clinical speech analytics, decorrelated features