SNP-S3: Shared Network Pre-training and Significant Semantic Strengthening for Various Video-Text Tasks
IEEE Transactions on Circuits and Systems for Video Technology (2024)
Abstract
We present a framework for learning cross-modal video representations by
directly pre-training on raw data to facilitate various downstream video-text
tasks. Our main contributions lie in the pre-training framework and proxy
tasks. First, motivated by the shortcomings of the two mainstream pixel-level
pre-training architectures (limited applicability or low efficiency), we propose
Shared Network Pre-training (SNP). By employing one shared BERT-type network to
refine textual and cross-modal features simultaneously, SNP is lightweight and
can support various downstream applications. Second, based on the intuition
that people always pay attention to several "significant words" when
understanding a sentence, we propose the Significant Semantic Strengthening
(S3) strategy, which includes a novel masking and matching proxy task to
promote the pre-training performance. Experiments conducted on three downstream
video-text tasks and six datasets demonstrate that we establish a new
state-of-the-art in pixel-level video-text pre-training and achieve a
satisfactory balance between pre-training efficiency and fine-tuning
performance. The codebase is available at
https://github.com/alipay/Ant-Multi-Modal-Framework/tree/main/prj/snps3_vtp.
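To illustrate the core idea behind the S3 masking proxy task (masking the "significant words" a reader would focus on, rather than masking uniformly at random), here is a minimal sketch. The token scores, the `significant_masking` function, and the example sentence are all hypothetical illustrations, not the paper's actual implementation; in practice, significance would come from a learned or statistical measure.

```python
MASK = "[MASK]"

def significant_masking(tokens, scores, mask_ratio=0.3):
    """Mask the highest-scoring ("significant") tokens instead of random ones.

    tokens: list of word tokens.
    scores: per-token significance weights (hypothetical; e.g. attention- or
            frequency-based in a real system).
    Returns the masked token list and the sorted masked indices.
    """
    n_mask = max(1, int(len(tokens) * mask_ratio))
    # Indices of the n_mask tokens with the highest significance scores.
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:n_mask]
    masked = [MASK if i in top else t for i, t in enumerate(tokens)]
    return masked, sorted(top)

# Toy example: content words carry higher (assumed) significance than articles.
tokens = ["a", "dog", "catches", "a", "red", "frisbee"]
scores = [0.1, 0.9, 0.8, 0.1, 0.5, 0.95]
masked, idx = significant_masking(tokens, scores, mask_ratio=0.5)
# masked → ["a", "[MASK]", "[MASK]", "a", "red", "[MASK]"]
```

The model is then trained to reconstruct the masked content words, which forces it to rely on the paired video rather than on easy syntactic context.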
Keywords
Video-Text Pre-training, Vision and Language, Masked Language Modeling, Video-Text Matching