Speech Tracking in Complex Auditory Scenes with Differentiated In- and Out-Field-Of-View Processing in Hearing Aids

2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)

Abstract
In naturalistic auditory scenes, relevant information is rarely concentrated at a single location, but rather unpredictably scattered in- and out-field-of-view (in-/out-FOV). Although parsing a complex auditory scene is a fairly simple job for a healthy human auditory system, this uncertainty represents a major issue in the development of effective hearing aid (HA) processing strategies. Whereas traditional omnidirectional microphones (OM) amplify the complete auditory scene without enhancing the signal-to-noise ratio (SNR) between in- and out-FOV streams, directional microphones (DM) may greatly increase SNR at the cost of preventing HA users from perceiving out-FOV information. The present study compares the conventional OM and DM HA settings to a split processing (SP) scheme that differentiates between in- and out-FOV processing. We recorded electroencephalographic data from ten young, normal-hearing listeners who solved a cocktail-party scenario paradigm with continuous auditory streams and analyzed neural tracking of speech with a stimulus reconstruction (SR) approach. While all settings exhibited significantly higher SR accuracies for attended in-FOV than for unattended out-FOV streams, there were distinct differences between settings. In-FOV SR performance was dominated by DM and SP, and out-FOV SR accuracies were significantly higher for SP compared to OM and DM. Our results demonstrate the potential of an SP approach to combine the advantages of traditional OM and DM settings without introducing significant compromises.
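The stimulus reconstruction approach mentioned in the abstract is commonly implemented as a linear backward (decoding) model: a time-lagged, ridge-regularized regression maps multichannel EEG back to the attended speech envelope, and SR accuracy is the Pearson correlation between the reconstructed and actual envelope. The sketch below illustrates this general technique on simulated data; the function names, lag range, and regularization value are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def lagged_design(eeg, lags):
    """Build a time-lagged design matrix.

    eeg  : (T, C) array of EEG samples x channels
    lags : iterable of integer sample lags (positive = EEG follows stimulus)
    returns (T, C * len(lags)) matrix of lag-shifted channel copies
    """
    T, C = eeg.shape
    X = np.zeros((T, C * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        # zero out samples that wrapped around the edges
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * C:(i + 1) * C] = shifted
    return X

def train_decoder(eeg, envelope, lags, lam=1e2):
    """Fit backward-model weights by ridge regression:
    w = (X'X + lam * I)^-1 X' y  (lam is an assumed regularization value)."""
    X = lagged_design(eeg, lags)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def sr_accuracy(eeg, envelope, w, lags):
    """SR accuracy: Pearson r between reconstructed and actual envelope."""
    reconstructed = lagged_design(eeg, lags) @ w
    return np.corrcoef(reconstructed, envelope)[0, 1]
```

In practice the decoder is trained on attended-stream data and evaluated per stream, so that attended in-FOV and unattended out-FOV streams each receive their own SR accuracy, as compared across the OM, DM, and SP settings in the study.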
Keywords
Hearing, Hearing Aids, Humans, Signal-To-Noise Ratio, Speech, Speech Perception