Speaker Detection and Applications to Cross-Modal Analysis of Planning Meetings

San Diego, CA (2009)

Abstract
Detection of meeting events is one of the most important tasks in multimodal analysis of planning meetings, and speaker detection is a key step in extracting the most meaningful meeting events. In this paper, we present an approach to speaker localization that combines visual and audio information in multimodal meeting analysis. When talking, people produce speech accompanied by mouth movements and hand gestures. By computing the correlation among audio signals, mouth movements, and hand motion, we detect a talking person both spatially and temporally. Three kinds of features are extracted for speaker localization: hand movements are expressed as hand motion efforts; audio features are expressed as 12 mel-frequency cepstral coefficients computed from the audio signal; and mouth movements are expressed as normalized cross-correlation coefficients of the mouth area between two successive frames. A time delay neural network is trained to learn the correlation relationships and is then applied to perform speaker localization. Experiments and applications in planning meeting environments are provided.
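The two signal-level features named in the abstract are both standard computations. Below is a minimal sketch of how they could be obtained, assuming grayscale mouth-region crops and a mono audio track; the function names and the use of librosa for MFCC extraction are our assumptions for illustration, not the authors' implementation.

```python
# Sketch of the per-frame features described in the abstract (illustrative,
# not the paper's code): mouth-movement NCC and 12 MFCCs. Assumes grayscale
# mouth crops as numpy arrays and mono audio; librosa is an assumed dependency.
import numpy as np
import librosa


def mouth_ncc(prev_patch: np.ndarray, curr_patch: np.ndarray) -> float:
    """Normalized cross-correlation of the mouth area between two
    successive frames; values near 1 indicate little mouth movement."""
    a = prev_patch.astype(np.float64) - prev_patch.mean()
    b = curr_patch.astype(np.float64) - curr_patch.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 1.0


def audio_mfcc(audio: np.ndarray, sr: int) -> np.ndarray:
    """12 mel-frequency cepstral coefficients per audio frame,
    shape (12, n_frames)."""
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=12)
```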
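The abstract does not specify the network architecture, only that a time delay neural network learns the audio-visual correlation. A common way to realize a TDNN is as 1-D convolutions over the per-frame feature sequence; the sketch below follows that reading, and the 14-dimensional input layout (12 MFCCs plus the mouth NCC and hand motion effort) is our assumption.

```python
# A minimal TDNN sketch in PyTorch (an assumed realization, not the paper's
# model): temporal convolutions over the feature sequence produce a
# speaking/not-speaking score for each time step.
import torch
import torch.nn as nn


class TDNN(nn.Module):
    def __init__(self, in_dim: int = 14, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=5, padding=2),  # +/-2 frame context
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),  # +/-1 frame context
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),  # per-frame logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim, time) -> (batch, 1, time)
        return self.net(x)
```

Applying the trained network to the feature track of each candidate person would yield a temporal speaking score per location, which is one way the spatial and temporal localization described above could be combined.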
Keywords
audio signal,mouth movements,motion compensation,audio signals,audio information,cross-modal analysis,audio signal analysis,time delay neural network,speaker localization,speaker detection,hand motion effort,multimodal meeting analysis,planning,meeting event detection,speaker recognition,audio feature,hand movement,planning meetings,meeting analysis,meaningful meeting event,gesture recognition,audio signal processing,hand motion,mouth movement,hand gesture,neural nets,planning meeting,face,modal analysis,data mining,signal analysis,mel frequency cepstral coefficient,normalized cross correlation,skin,feature extraction