Evaluating Audiovisual Source Separation in the Context of Video Conferencing
INTERSPEECH (2019)
Abstract
Source separation from single-channel audio is a challenging problem, in particular for speech separation, where source contributions overlap in both time and frequency. This task is of high interest for applications such as video conferencing. Recent progress in machine learning has shown that incorporating visual cues from the video can improve source separation performance. Starting from a recently proposed deep neural network, we assess its ability and robustness in separating the visible speakers' speech from interfering speech or other signals. We test it under different video-recording configurations in which the speaker's face may not be fully visible. We also assess the network's performance with respect to different sets of visual features extracted from the speakers' faces.
Keywords
speech enhancement, source separation, multi-modal, audiovisual
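To make the audiovisual-fusion idea in the abstract concrete, below is a minimal sketch of a mask-based separation network that combines a mixture spectrogram with per-frame face embeddings. This is not the authors' architecture: the layer sizes, feature dimensions (n_freq, visual_dim, hidden), and the BLSTM fusion are illustrative assumptions only.

```python
# Minimal audiovisual speech-separation sketch (illustrative, not the
# paper's model): predict a time-frequency mask for the target speaker
# by fusing an audio stream with per-frame visual (face) embeddings.
import torch
import torch.nn as nn

class AudioVisualSeparator(nn.Module):
    def __init__(self, n_freq=257, visual_dim=512, hidden=256):
        super().__init__()
        # Audio stream: per-frame magnitude spectrogram -> hidden features
        self.audio_net = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Visual stream: per-frame face embeddings -> hidden features
        self.visual_net = nn.Sequential(
            nn.Linear(visual_dim, hidden), nn.ReLU(),
        )
        # Fusion: concatenate both streams, model time with a BLSTM,
        # then predict a per-frequency soft mask in [0, 1]
        self.rnn = nn.LSTM(2 * hidden, hidden, batch_first=True,
                           bidirectional=True)
        self.mask_head = nn.Sequential(
            nn.Linear(2 * hidden, n_freq), nn.Sigmoid(),
        )

    def forward(self, mix_spec, face_emb):
        # mix_spec: (batch, time, n_freq) mixture magnitude spectrogram
        # face_emb: (batch, time, visual_dim) visual features, assumed
        #           already resampled to the audio frame rate
        a = self.audio_net(mix_spec)
        v = self.visual_net(face_emb)
        h, _ = self.rnn(torch.cat([a, v], dim=-1))
        mask = self.mask_head(h)
        return mask * mix_spec  # masked estimate of the target speech

if __name__ == "__main__":
    model = AudioVisualSeparator()
    mix = torch.rand(2, 100, 257)    # 2 clips, 100 spectrogram frames
    faces = torch.rand(2, 100, 512)  # matching per-frame face embeddings
    est = model(mix, faces)
    print(est.shape)                 # torch.Size([2, 100, 257])
```

Masking the mixture spectrogram rather than predicting the target directly is a common design choice in this line of work; the visual stream lets the mask follow the visible speaker even when the voices overlap in time and frequency.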