Bridge to Non-Barrier Communication: Gloss-Prompted Fine-grained Cued Speech Gesture Generation with Diffusion Model
CoRR (2024)
Abstract
Cued Speech (CS) is an advanced visual phonetic encoding system that integrates lip reading with hand coding, enabling people with hearing impairments to communicate efficiently. CS video generation aims to produce the specific lip and gesture movements of CS from audio or text input. The main challenge is that, given limited CS data, we must simultaneously generate fine-grained hand and finger movements as well as lip movements, while the two kinds of movements also need to be asynchronously aligned. Existing CS generation methods are fragile and prone to poor performance because they rely on template-based statistical models and carefully hand-crafted pre-processing to fit those models. We therefore propose a novel Gloss-prompted Diffusion-based CS Gesture generation framework (GlossDiff). Specifically, to integrate additional linguistic knowledge into the model, we first introduce a bridging instruction called Gloss, an automatically generated descriptive text that establishes a direct and more delicate semantic connection between spoken language and CS gestures. Moreover, we are the first to suggest that rhythm is an important paralinguistic feature of CS that improves communication efficacy, and we propose a novel Audio-driven Rhythmic Module (ARM) to learn rhythm that matches the audio speech. Finally, we design, record, and publish the first Chinese CS dataset, featuring four CS cuers. Extensive experiments demonstrate that our method quantitatively and qualitatively outperforms current state-of-the-art (SOTA) methods. We release the code and data at https://glossdiff.github.io/.
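The abstract does not detail the diffusion formulation, but a gloss-conditioned denoising step in a DDPM-style gesture model might look roughly like the sketch below. All module names, the pose dimensionality, and the cross-attention conditioning scheme are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of gloss-conditioned diffusion denoising for gesture
# sequences, assuming a DDPM-style formulation. Module names, the pose
# dimensionality (pose_dim), and the conditioning design are hypothetical.
import torch
import torch.nn as nn

class GlossConditionedDenoiser(nn.Module):
    """Predicts the noise added to a gesture sequence, conditioned on
    gloss-text embeddings via cross-attention (hypothetical design)."""
    def __init__(self, pose_dim=134, d_model=256, n_heads=4, n_steps=1000):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, d_model)
        self.time_emb = nn.Embedding(n_steps, d_model)   # diffusion step t
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, pose_dim)

    def forward(self, noisy_pose, t, gloss_emb):
        # noisy_pose: (B, T, pose_dim); t: (B,); gloss_emb: (B, L, d_model)
        h = self.pose_proj(noisy_pose) + self.time_emb(t).unsqueeze(1)
        h, _ = self.cross_attn(h, gloss_emb, gloss_emb)  # attend to gloss tokens
        return self.out(h)                               # predicted noise eps

def reverse_step(model, x_t, t, gloss_emb, betas):
    """One DDPM reverse step: sample x_{t-1} from x_t given predicted noise."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = torch.cumprod(1.0 - betas, dim=0)[t]
    t_batch = torch.full((x_t.size(0),), t, dtype=torch.long)
    eps = model(x_t, t_batch, gloss_emb)
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(beta_t) * noise

# Tiny smoke test with random tensors (shapes are illustrative).
betas = torch.linspace(1e-4, 0.02, 1000)
model = GlossConditionedDenoiser()
x_t = torch.randn(2, 50, 134)     # batch of 50-frame pose sequences
gloss = torch.randn(2, 12, 256)   # 12 gloss-token embeddings per sample
x_prev = reverse_step(model, x_t, 999, gloss, betas)
print(x_prev.shape)               # torch.Size([2, 50, 134])
```

In this reading, the gloss tokens act as the conditioning signal that the denoiser cross-attends to at every step, which is one plausible way to realize the "bridging instruction" the abstract describes; the paper itself should be consulted for the actual architecture and how the ARM injects rhythm.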