Cans and cants: Computational potentials for multimodality with a case study in head position

Journal of Sociolinguistics (2016)

Abstract
As the study of embodiment and multimodality in interaction grows in importance, there is a need for novel methodological approaches to understand how multimodal variables pattern together along social and contextual lines, and how they systematically coalesce into communicative meanings. In this work, we propose to adopt computational tools to generate replicable annotations of bodily variables: these can be examined statistically to understand their patterning with other variables across diverse speakers and interactional contexts, and can help organize qualitative analyses of large datasets. We demonstrate these possibilities with a case study of head cant (side-to-side tilt of the head) in a dataset of video blogs and laboratory-collected interactions, computationally extracting cant and prosody from video and audio and analyzing their interactions, with particular attention to gender. We find that head cant indexes an orientation towards the interlocutor and a sense of shared understanding, can serve a 'bracketing' function in interaction (allowing speakers to create parentheticals or asides), and has gendered associations with prosodic markers and interactional discourse particles.
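The paper's extraction pipeline is not reproduced on this page. As a minimal illustrative sketch of the kind of replicable annotation the abstract describes, one could estimate per-frame head cant as the roll of the line joining the outer eye corners, and extract a pitch track for prosody. The libraries here (MediaPipe Face Mesh for landmarks, Parselmouth for Praat-style pitch) are assumptions for illustration, not necessarily the authors' tools:

```python
import math

import cv2
import mediapipe as mp
import parselmouth


def head_cant_series(video_path):
    """Per-frame head cant in degrees: roll of the interocular axis.
    0 = level head; sign indicates tilt toward one shoulder or the other."""
    cants = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                cants.append(None)  # no face detected in this frame
                continue
            lm = result.multi_face_landmarks[0].landmark
            # Indices 33 and 263 are the outer eye corners in the
            # MediaPipe Face Mesh topology (normalized coordinates).
            left, right = lm[33], lm[263]
            cants.append(math.degrees(
                math.atan2(right.y - left.y, right.x - left.x)))
    cap.release()
    return cants


def pitch_series(audio_path):
    """F0 track in Hz via Praat's autocorrelation method; 0 = unvoiced."""
    snd = parselmouth.Sound(audio_path)
    return snd.to_pitch().selected_array['frequency']
```

Time-aligning the two series (video frame rate against the pitch analysis time step) would then permit the kind of joint cant-prosody analysis the abstract reports.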
Keywords
Embodiment, computer vision, multimodality, head cant, body positioning, prosody, gender, interaction