A Multigrid Method for Efficiently Training Video Models

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 112 | Views 813
Abstract
Training competitive deep video models is an order of magnitude slower than training their counterpart image models. Slow training causes long research cycles, which hinders progress in video understanding research. Following standard practice for training image models, video model training has used a fixed mini-batch shape: a specific number of clips, frames, and spatial size. However, what is the optimal shape? High resolution models perform well, but train slowly. Low resolution models train faster, but are less accurate. Inspired by multigrid methods in numerical optimization, we propose to use variable mini-batch shapes with different spatial-temporal resolutions that are varied according to a schedule. The different shapes arise from resampling the training data on multiple sampling grids. Training is accelerated by scaling up the mini-batch size and learning rate when shrinking the other dimensions. We empirically demonstrate a general and robust grid schedule that yields a significant out-of-the-box training speedup without a loss in accuracy for different models (I3D, non-local, SlowFast), datasets (Kinetics, Something-Something, Charades), and training settings (with and without pre-training, 128 GPUs or 1 GPU). As an illustrative example, the proposed multigrid method trains a ResNet-50 SlowFast network 4.5x faster (wall-clock time, same hardware) while also improving accuracy (+0.8% absolute) on Kinetics-400 compared to baseline training. Code is available online.
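The core mechanic described in the abstract (resample clips onto a coarser sampling grid, then scale the mini-batch size and learning rate up by the compute saved) can be sketched briefly. The following is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the cycle constants, `multigrid_schedule`, and `resample_clip` are illustrative assumptions, and the paper's actual method combines long and short grid cycles rather than the single coarse-to-fine cycle shown here.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the idea in the abstract: vary the mini-batch
# shape (clips x frames x spatial size) over training, scaling the batch
# size and learning rate up whenever the other dimensions shrink, so the
# cost per mini-batch stays roughly constant. All names and constants
# below are hypothetical, not the authors' released code.

BASE_B, BASE_T, BASE_S = 8, 32, 224   # base batch, frames, spatial size
BASE_LR = 0.1                          # base learning rate

# A coarse-to-fine "long cycle": (temporal divisor, spatial divisor).
LONG_CYCLE = [
    (4, 2),  # T/4, S/2 -> 16x fewer voxels per clip -> 16x larger batch
    (2, 2),  # T/2, S/2 ->  8x
    (2, 1),  # T/2, S   ->  2x
    (1, 1),  # T,   S   ->  1x (base shape)
]

def multigrid_schedule(epoch, num_epochs):
    """Return (batch, frames, spatial size, lr) for this epoch."""
    phase = min(len(LONG_CYCLE) - 1, epoch * len(LONG_CYCLE) // num_epochs)
    t_div, s_div = LONG_CYCLE[phase]
    # Shrinking T by t_div and S by s_div cuts per-clip cost by
    # t_div * s_div**2, so the batch can grow by the same factor.
    scale = t_div * s_div ** 2
    # Linear scaling rule: learning rate grows with the batch size.
    return BASE_B * scale, BASE_T // t_div, BASE_S // s_div, BASE_LR * scale

def resample_clip(clip, frames, size):
    """Resample a (B, C, T, H, W) clip onto a coarser sampling grid."""
    return F.interpolate(clip, size=(frames, size, size),
                         mode="trilinear", align_corners=False)

# Shape schedule over a 40-epoch run.
for epoch in (0, 10, 20, 30, 39):
    b, t, s, lr = multigrid_schedule(epoch, 40)
    print(f"epoch {epoch:2d}: batch={b:3d} frames={t:2d} size={s:3d} lr={lr:.2f}")

# Resampling example: a full-resolution clip moved onto a coarser grid.
clip = torch.randn(2, 3, BASE_T, BASE_S, BASE_S)
print(resample_clip(clip, 8, 112).shape)  # torch.Size([2, 3, 8, 112, 112])
```

Scaling the batch by t_div * s_div**2 keeps the per-iteration cost roughly constant while letting the coarse phases process many more clips per epoch, which is where the wall-clock speedup comes from.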
Keywords
multigrid method, efficient video model training, deep video models, mini-batch shape, spatial-temporal resolution, mini-batch size, learning rate, grid schedule, training speedup, video understanding