Factorized Learning Assisted with Large Language Model for Gloss-free Sign Language Translation
CoRR (2024)
Abstract
Previous Sign Language Translation (SLT) methods achieve superior performance
by relying on gloss annotations. However, labeling high-quality glosses is a
labor-intensive task, which limits the further development of SLT. Although
some approaches pursue gloss-free SLT by jointly training the visual encoder
and the translation network, these efforts still suffer from poor performance
and inefficient use of powerful Large Language Models (LLMs). Most seriously,
we find that directly introducing an LLM into SLT leads to insufficient
learning of visual representations, as the LLM dominates the learning curve.
To address these problems, we propose Factorized Learning assisted with Large
Language Model (FLa-LLM) for gloss-free SLT. Concretely, we factorize the
training process into two stages. In the visual initializing stage, we attach
a lightweight translation model after the visual encoder to pre-train the
visual encoder. In the LLM fine-tuning stage, we freeze the acquired knowledge
in the visual encoder and integrate it with a pre-trained LLM to unlock the
LLM's translation potential. This factorized training strategy proves highly
effective, as evidenced by significant improvements across three SLT datasets,
all evaluated under the gloss-free setting.
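To make the two-stage recipe concrete, below is a minimal PyTorch sketch of the factorized schedule as described in the abstract. Everything here is illustrative, not the paper's implementation: the module names (VisualEncoder, LightweightTranslator), dimensions, vocabulary size, and the tiny placeholder standing in for a real pre-trained LLM are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000   # illustrative vocabulary size (assumption)
DIM = 256      # illustrative feature dimension (assumption)

class VisualEncoder(nn.Module):
    """Hypothetical stand-in for the sign-video visual encoder."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(512, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

    def forward(self, frames):        # frames: (batch, time, 512) pre-extracted features
        return self.proj(frames)      # -> (batch, time, DIM)

class LightweightTranslator(nn.Module):
    """Small translation head used only in stage 1 to supervise the encoder."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, feats):
        return self.head(self.encoder(feats))   # -> (batch, time, VOCAB)

visual_encoder = VisualEncoder()
translator = LightweightTranslator()

# ---- Stage 1: visual initializing ----
# Jointly train the encoder and the lightweight head so the encoder learns
# translation-relevant representations without gloss supervision.
opt1 = torch.optim.AdamW(
    list(visual_encoder.parameters()) + list(translator.parameters()), lr=1e-4)
frames = torch.randn(2, 16, 512)            # dummy sign-video features
tokens = torch.randint(0, VOCAB, (2, 16))   # dummy target text tokens
logits = translator(visual_encoder(frames))
loss1 = F.cross_entropy(logits.reshape(-1, VOCAB), tokens.reshape(-1))
loss1.backward()
opt1.step()

# ---- Stage 2: LLM fine-tuning ----
# Freeze the initialized encoder and discard the lightweight head; the frozen
# visual features now feed a pre-trained LLM, the only component updated here.
for p in visual_encoder.parameters():
    p.requires_grad_(False)
visual_encoder.eval()

llm = nn.Sequential(                        # placeholder for a real pre-trained LLM
    nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, VOCAB))
opt2 = torch.optim.AdamW(llm.parameters(), lr=1e-5)
with torch.no_grad():
    feats = visual_encoder(frames)          # frozen visual features
logits = llm(feats)
loss2 = F.cross_entropy(logits.reshape(-1, VOCAB), tokens.reshape(-1))
loss2.backward()
opt2.step()
```

The point of the split, per the abstract, is that freezing the encoder in stage 2 stops the LLM from dominating optimization, so the visual representations learned in stage 1 are preserved rather than overwritten.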