The Recognition of Facial Expressions of Emotion in Deaf and Hearing Individuals

Heliyon (2021)

Abstract
During real-life interactions, facial expressions of emotion are perceived dynamically, together with multimodal sensory information. In the absence of auditory input, it is unclear how facial expressions are recognised and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage exists for deaf observers. In line with the enhancement hypothesis, we hypothesised that the absence of auditory information might have forced the visual system to process visual (unimodal) signals more effectively, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (>70 dB hearing loss) and hearing controls to estimate their ability to recognise the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorisation profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). Contrary to our hypothesis, we found no recognition advantage for dynamic compared to static facial expressions in deaf observers. This observation shows that the decoding of dynamic emotional signals from facial expressions is not superior even in the expert visual system of deaf observers, suggesting that static facial expressions of emotion at the apex already carry optimal signals. Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.
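For illustration only, the shuffled condition described above (randomly permuted video frames of an expression) could be produced along the following lines; this is a minimal sketch assuming the clips are held as NumPy frame arrays, and the function name and array layout are assumptions rather than the authors' actual implementation.

import numpy as np

def shuffle_frames(video, rng=None):
    # `video` is assumed to have shape (n_frames, height, width[, channels]).
    # Permuting along the first axis keeps every frame's static content
    # but destroys the natural temporal order of the expression.
    rng = np.random.default_rng() if rng is None else rng
    permutation = rng.permutation(video.shape[0])
    return video[permutation]

# Example with a dummy 30-frame greyscale clip of 256 x 256 pixels.
clip = np.zeros((30, 256, 256), dtype=np.uint8)
shuffled_clip = shuffle_frames(clip, rng=np.random.default_rng(seed=0))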
Keywords
Facial expression recognition, Dynamic versus static expression recognition, Bayesian hierarchical modelling, Psychophysics