
Title Pending 6432

Abstract
This study argues for a multimodal view of the identification, representation, and implementation of intonational structure, with evidence from gesture apex-tone coordination in Turkish. Many studies have reported consistent synchronisation of atomic prominence markers across modalities (i.e., pitch accents and gesture apexes). This is prima facie evidence that gesture and prosody are implemented together, and therefore that the former can play a role in the identification and perception of the latter through apex-tone synchronisation. However, only a few studies have considered the full intonational context when investigating synchronisation (e.g., the potential alignment of apexes with boundary tones). This is particularly relevant for Turkish, as there is disagreement in the literature about whether all words in Turkish bear a pitch accent. In this study, we test the synchronisation of apexes with all intonational events in Turkish natural speech data annotated for gesture and prosody, resulting in 820 gesture apex and 3697 tonal event annotations. The study uses syllable duration (160 ms) to determine synchronisation between these anchors via equivalence tests, while also integrating gestural and prosodic context as factors that can affect the temporal distance between these units through mixed-effects linear regression. The findings showed that apexes were chiefly synchronised with pitch accents (71%), indicating that prominence was the primary constraint for synchronisation. However, analysis of cases with no prosodic prominence provides the first evidence for a hierarchical constraint on synchronisation, since apexes were preferentially synchronised with the tones marking prosodic words (76%) and not with the markers of prosodic constituents higher in the hierarchy. This finding supports the claim that there may be accentless words in Turkish, since the absence of prominence caused a systematic shift in the synchronisation behaviour of apexes. More generally, the study shows how multimodal evidence from gesture can be used in the identification of phonological categories, and that prosodic structure is likely to be expressed through multimodal cues as a composite signal.
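The abstract describes two analysis steps: an equivalence test checking whether apex-tone lags fall within a syllable-sized window of 160 ms, and a mixed-effects linear regression of the lag on contextual factors. The sketch below is not the authors' code; it is a minimal illustration of that analysis logic using hypothetical column names (lag_ms, gesture_context, prosodic_context, speaker) and a hypothetical input file.

```python
# Minimal sketch of the two analysis steps named in the abstract (assumed data layout,
# not the study's actual pipeline): (1) a two one-sided tests (TOST) equivalence test
# on apex-tone lags with +/-160 ms bounds, (2) a mixed-effects linear regression.
import pandas as pd
from statsmodels.stats.weightstats import DescrStatsW
import statsmodels.formula.api as smf

# Hypothetical table: one row per apex-tone pair, lag in milliseconds
# (apex time minus tonal-event time), plus context factors and speaker IDs.
df = pd.read_csv("apex_tone_pairs.csv")

# (1) Equivalence test: are lags statistically within the 160 ms syllable window?
p_tost, lower, upper = DescrStatsW(df["lag_ms"]).ttost_mean(low=-160, upp=160)
print(f"TOST p-value: {p_tost:.4f}")  # small p -> lag equivalent to zero within +/-160 ms

# (2) Mixed-effects model: do gestural and prosodic context shift the lag?
# Speaker is treated as a random grouping factor.
model = smf.mixedlm("lag_ms ~ gesture_context + prosodic_context",
                    data=df, groups=df["speaker"])
result = model.fit()
print(result.summary())
```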