End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames
arXiv (2023)
Abstract
Recently, temporal action detection (TAD) has seen significant performance
improvement with end-to-end training. However, due to the memory bottleneck,
only models with limited scales and limited data volumes can afford end-to-end
training, which inevitably restricts TAD performance. In this paper, we reduce
the memory consumption for end-to-end training, and manage to scale up the TAD
backbone to 1 billion parameters and the input video to 1,536 frames, leading
to significant detection performance. The key to our approach lies in our
proposed temporal-informative adapter (TIA), which is a novel lightweight
module that reduces training memory. Using TIA, we free the humongous backbone
from learning to adapt to the TAD task by only updating the parameters in TIA.
TIA also leads to better TAD representation by temporally aggregating context
from adjacent frames throughout the backbone. We evaluate our model across four
representative datasets. Owing to our efficient design, we are able to train
end-to-end on VideoMAEv2-giant and achieve 75.4% mAP on THUMOS14, being the
first end-to-end model to outperform the best feature-based methods. Code is
available at https://github.com/sming256/AdaTAD.
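The adapter idea in the abstract, freezing the backbone and training only a lightweight module that mixes context from adjacent frames, can be sketched as follows. This is a minimal illustration with a simple bottleneck and a 3-tap temporal average; the function name, shapes, and aggregation choice are assumptions for exposition, not the paper's actual TIA implementation.

```python
import numpy as np

def tia_block(x, w_down, w_up):
    """Hypothetical temporal-informative adapter (TIA) sketch.

    x: (T, C) per-frame features from a frozen backbone stage.
    Down-projects to a bottleneck, mixes each frame with its temporal
    neighbours, up-projects, and adds the result back as a residual.
    Only w_down / w_up would be trained; the backbone stays frozen,
    which is what keeps training memory low.
    """
    h = x @ w_down                        # (T, r) bottleneck features
    # Aggregate context from adjacent frames: a simple 3-tap average
    # stands in for the paper's temporal aggregation.
    padded = np.pad(h, ((1, 1), (0, 0)), mode="edge")
    h = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    h = np.maximum(h, 0.0)                # ReLU nonlinearity
    return x + h @ w_up                   # residual back into the backbone

T, C, r = 8, 16, 4                        # frames, channels, bottleneck width
rng = np.random.default_rng(0)
x = rng.standard_normal((T, C))
out = tia_block(x,
                rng.standard_normal((C, r)) * 0.1,
                rng.standard_normal((r, C)) * 0.1)
print(out.shape)  # (8, 16): same shape as the input, so the adapter
                  # can be inserted between frozen backbone stages
```

The residual form means the adapter can be initialized near zero so the frozen backbone's behavior is preserved at the start of training.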