Subject-Specific Adaptation for a Causally-Trained Auditory-Attention Decoding System

ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
Future hearing-aid technology may allow a listener to isolate a single talker of interest from a mixture by shifting their attention, as measured by electroencephalography (EEG). Such decoding algorithms are often trained with data from a single individual or from a pool of several participants (i.e., a group model). Performance in either approach is limited: group models suffer from variability across subjects and time, while individual models are constrained by the limited data available per subject. To overcome this challenge, we introduce a subject-specific adaptive form of auditory attention decoding (AAD) that operates over short time windows to account for variability across EEG recording sessions. Our subject-specific augmented model adapts a group model to an individual, significantly improving decoding accuracy by approximately 10% compared to an individual model. This result has implications for real-time applications of neuro-steered hearing aids, where causal training data and real-time algorithms are necessary.
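The abstract does not specify the exact adaptation procedure, but a common way to realize "adapting a group model to an individual" in linear stimulus-reconstruction AAD is to fit the subject's decoder with a ridge-style penalty that shrinks it toward a pre-trained group decoder, then classify attention by correlating the reconstructed envelope with each candidate talker over short windows. The sketch below is an illustrative assumption, not the authors' published method; all variable names, dimensions, and regularization values are hypothetical.

```python
# Minimal sketch (assumed approach, not the paper's exact algorithm):
# stimulus-reconstruction AAD with a subject decoder regularized toward a group decoder.
import numpy as np

rng = np.random.default_rng(0)

def fit_decoder(X, y, lam=1.0, w_prior=None):
    """Linear decoder mapping EEG features X (T x D) to an attended-envelope
    estimate y (T,). With w_prior, the solution is shrunk toward it:
    w = argmin ||Xw - y||^2 + lam * ||w - w_prior||^2."""
    D = X.shape[1]
    if w_prior is None:
        w_prior = np.zeros(D)
    A = X.T @ X + lam * np.eye(D)
    b = X.T @ y + lam * w_prior
    return np.linalg.solve(A, b)

def decode_attention(X_win, w, env_a, env_b):
    """Pick the talker whose speech envelope correlates best with the
    reconstruction over one short decision window."""
    y_hat = X_win @ w
    r_a = np.corrcoef(y_hat, env_a)[0, 1]
    r_b = np.corrcoef(y_hat, env_b)[0, 1]
    return 0 if r_a > r_b else 1

# --- Synthetic demo data (purely illustrative) ---
T, D = 2000, 32                        # samples, EEG feature dimension
w_true = rng.normal(size=D)
X_group = rng.normal(size=(T, D))      # pooled multi-subject EEG features
y_group = X_group @ w_true + 0.5 * rng.normal(size=T)

# 1) Train a group decoder on pooled data.
w_group = fit_decoder(X_group, y_group, lam=1.0)

# 2) Adapt to a new subject with limited data, shrinking toward w_group.
X_subj = rng.normal(size=(200, D))
y_subj = X_subj @ (w_true + 0.3 * rng.normal(size=D)) + 0.5 * rng.normal(size=200)
w_adapted = fit_decoder(X_subj, y_subj, lam=5.0, w_prior=w_group)

# 3) Decode attention over a short window against two candidate envelopes.
X_win = rng.normal(size=(128, D))
env_attended = X_win @ w_true + 0.3 * rng.normal(size=128)
env_ignored = rng.normal(size=128)
print("decoded talker:", decode_attention(X_win, w_adapted, env_attended, env_ignored))
```

In this formulation, the regularization weight controls how strongly the subject-specific decoder is pulled toward the group solution, trading off between the two limitations the abstract describes: too little adaptation inherits cross-subject variability, while too much relies on scarce individual data.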
Keywords
Auditory attention decoding, EEG, subject-specific, time-adaptive