A multimodal probabilistic model for gesture-based control of sound synthesis.

MM '13: ACM Multimedia Conference, Barcelona, Spain, October 2013

Abstract
In this paper, we propose a multimodal approach to creating the mapping between gesture and sound in interactive music systems. Specifically, we propose to use a multimodal HMM to jointly model the gesture and sound parameters. Our approach is compatible with a learning method that allows users to define the gesture-sound relationships interactively. We describe an implementation of this method for the control of physical modeling sound synthesis. Our model shows promise for capturing expressive gesture variations while guaranteeing a consistent relationship between gesture and sound.
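To make the idea concrete, the sketch below illustrates one possible reading of a joint gesture-sound HMM: a single Gaussian HMM is trained on concatenated gesture and sound feature frames from a demonstration, and at mapping time the gesture stream is forward-filtered on the gesture dimensions only, with the posterior-weighted sound means used as predicted synthesis parameters. This is not the authors' implementation; the library (hmmlearn), feature dimensions, and data are placeholders chosen for illustration.

```python
# Minimal sketch of a joint (multimodal) gesture-sound Gaussian HMM.
# Assumptions: hmmlearn and scipy are available; feature sizes and data are synthetic.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.stats import multivariate_normal

N_GESTURE, N_SOUND, N_STATES = 3, 2, 8   # hypothetical feature dimensions

# --- Training: one demonstration of synchronized gesture + sound frames ---
T = 200
gesture_demo = np.random.randn(T, N_GESTURE)   # placeholder gesture recording
sound_demo = np.random.randn(T, N_SOUND)       # placeholder synthesis parameters
joint = np.hstack([gesture_demo, sound_demo])  # multimodal observation vectors

hmm = GaussianHMM(n_components=N_STATES, covariance_type="full", n_iter=50)
hmm.fit(joint)  # learns joint emission distributions over gesture and sound

# --- Mapping: filter a new gesture stream through the joint model ---
def gesture_to_sound(gesture_stream, hmm, n_gesture=N_GESTURE):
    """Forward-filter on the gesture dimensions only, then output the
    expected sound parameters under the state posterior at each frame."""
    mu_g = hmm.means_[:, :n_gesture]               # gesture block of state means
    mu_s = hmm.means_[:, n_gesture:]               # sound block of state means
    cov_g = hmm.covars_[:, :n_gesture, :n_gesture] # gesture block of covariances

    alpha = None
    out = []
    for x in gesture_stream:
        # emission likelihood of the gesture frame under each state's gesture block
        lik = np.array([multivariate_normal.pdf(x, mu_g[k], cov_g[k])
                        for k in range(hmm.n_components)])
        pred = hmm.startprob_ if alpha is None else alpha @ hmm.transmat_
        alpha = lik * pred
        alpha /= alpha.sum()
        out.append(alpha @ mu_s)  # posterior-weighted expected sound parameters
    return np.array(out)

new_gesture = np.random.randn(50, N_GESTURE)
predicted_sound = gesture_to_sound(new_gesture, hmm)
print(predicted_sound.shape)  # (50, N_SOUND)
```

Because the state posteriors are inferred causally from gesture input alone, the mapping can be run frame by frame in real time, which is the kind of consistent gesture-to-sound relationship the abstract describes.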