Percept, Chat, and then Adapt: Multimodal Knowledge Transfer of Foundation Models for Open-World Video Recognition
CoRR (2024)
Abstract
Open-world video recognition is challenging since traditional networks do
not generalize well under complex environment variations. Alternatively,
foundation models with rich knowledge have recently shown their generalization
power. However, how to apply such knowledge has not been fully explored for
open-world video recognition. To this end, we propose a generic knowledge
transfer pipeline, which progressively exploits and integrates external
multimodal knowledge from foundation models to boost open-world video
recognition. We name it PCA, based on three stages of Percept, Chat, and Adapt.
First, we perform the Percept stage to reduce the video domain gap and obtain
external visual knowledge. Second, we generate rich linguistic semantics as
external textual knowledge in the Chat stage. Finally, we blend external
multimodal knowledge in the Adapt stage, by inserting multimodal knowledge
adaptation modules into networks. We conduct extensive experiments on three
challenging open-world
into networks. We conduct extensive experiments on three challenging open-world
video benchmarks, i.e., TinyVIRAT, ARID, and QV-Pipe. Our approach achieves
state-of-the-art performance on all three datasets.
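The three-stage flow described above can be sketched in a minimal, toy form. All function names, the stand-in features, and the fusion scheme below are illustrative assumptions for exposition only, not the paper's actual implementation:

```python
# Hypothetical sketch of the three-stage PCA pipeline (Percept -> Chat -> Adapt).
# Features are toy per-frame means; captions and fusion are placeholders.

def percept(video_frames):
    """Percept stage: reduce the domain gap and extract external visual
    knowledge (here, a per-frame mean intensity as a stand-in feature)."""
    return [sum(frame) / len(frame) for frame in video_frames]

def chat(visual_knowledge):
    """Chat stage: generate linguistic semantics as external textual
    knowledge (here, a toy caption per visual feature)."""
    return [f"scene with intensity {v:.2f}" for v in visual_knowledge]

def adapt(visual_knowledge, textual_knowledge):
    """Adapt stage: blend the external multimodal knowledge (here, simply
    pairing the two modalities as the recognition input)."""
    return list(zip(visual_knowledge, textual_knowledge))

def pca_pipeline(video_frames):
    visual = percept(video_frames)
    textual = chat(visual)
    return adapt(visual, textual)

# Toy "video": two frames of pixel intensities.
result = pca_pipeline([[0.2, 0.4], [0.6, 0.8]])
```

In the paper, each stage draws on a different foundation model (visual for Percept, language for Chat), and Adapt fuses their outputs inside the recognition network; the sketch only mirrors the data flow between stages.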