The Workshop Programme: "Multimodal Corpora: Models of Human Behaviour for the Specification and Evaluation of Multimodal Input and Output Interfaces", Tuesday 25th May 2004

Emanuela Magno Caldognetto, Isabella Poggi, Federica Cavicchio, Loredana Cerrato, Björn Granström, Magnus Andreas Nordstrand

Semantic Scholar (2019)

Abstract
We present our multimedia Visualization for Situated Temporal Analysis (VisSTA) system, which facilitates the analysis of multimodal human communication by incorporating video, audio, speech transcriptions, and coded multimodal (e.g., gesture and gaze) data. VisSTA is based on the Multiple Linked Representation strategy and keeps the user temporally situated by ensuring tight linkage among all representational components. The system offers multiple representations, including a hierarchical video-shot organization, a variety of animated graphs, animated multi-tier text transcripts, and an avatar representation. VisSTA is a multi-video system that permits simultaneous playback of multiple synchronized video streams time-locked to the other data components. An integrated observation database for storing the results of data analysis is also included.
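The tight temporal linkage described in the abstract can be illustrated with a small sketch. The following Python code is not taken from VisSTA; the Timeline, TranscriptView, and VideoView names are hypothetical stand-ins showing how a single shared clock can keep several linked representations (video frames, multi-tier transcripts) time-locked when the user seeks.

```python
# Minimal sketch (not the actual VisSTA code) of the Multiple Linked
# Representation idea: every view subscribes to one shared timeline, so a
# single seek keeps all representations temporally situated.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class Timeline:
    """Shared clock; views register callbacks and are notified on every seek."""
    current_time: float = 0.0
    _listeners: List[Callable[[float], None]] = field(default_factory=list)

    def link(self, listener: Callable[[float], None]) -> None:
        self._listeners.append(listener)

    def seek(self, t: float) -> None:
        self.current_time = t
        for notify in self._listeners:
            notify(t)


class TranscriptView:
    """Multi-tier transcript: reports the labels spanning the current time."""
    def __init__(self, tiers: Dict[str, List[Tuple[float, float, str]]]):
        # tiers maps a tier name to a list of (start, end, label) segments.
        self.tiers = tiers

    def on_seek(self, t: float) -> None:
        for name, segments in self.tiers.items():
            active = [lab for (s, e, lab) in segments if s <= t < e]
            print(f"[{name}] t={t:.2f}s -> {active or 'silence'}")


class VideoView:
    """Stands in for a synchronized video stream; reports the frame index."""
    def __init__(self, name: str, fps: float = 25.0):
        self.name, self.fps = name, fps

    def on_seek(self, t: float) -> None:
        print(f"[{self.name}] t={t:.2f}s -> frame {int(t * self.fps)}")


if __name__ == "__main__":
    timeline = Timeline()
    transcript = TranscriptView({
        "speech": [(0.0, 1.5, "hello"), (1.5, 3.0, "there")],
        "gesture": [(0.5, 2.0, "beat")],
    })
    cam_a, cam_b = VideoView("camera-A"), VideoView("camera-B")

    # Tight linkage: one seek updates every representation at once.
    for callback in (transcript.on_seek, cam_a.on_seek, cam_b.on_seek):
        timeline.link(callback)
    timeline.seek(1.2)
```

In this toy version the views merely print their state; in a full tool they would redraw video frames, scroll transcripts, and advance animated graphs, but the design point is the same: all components observe one timeline rather than keeping their own clocks.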