Evaluation of FractalDB Pre-training with Vision Transformers

Seimitsu Kōgakkaishi (2023)

Abstract
Since the introduction of the Vision Transformer, many Transformer-based networks have been proposed. Nakashima et al. showed that ViT and gMLP can be pre-trained on FractalDB and achieve accuracy comparable to ImageNet-1k pre-training. We hypothesize that other Transformer networks may also benefit from pre-training on FractalDB. If this hypothesis holds, improvements to FDSL-based datasets such as FractalDB can be expected to improve the accuracy of existing networks as well as networks proposed in the future. In this paper, we therefore conduct exhaustive pre-training experiments with representative Transformer networks on FractalDB.
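To make the setup concrete, the following is a minimal sketch of the FDSL pre-training step described above: a Transformer backbone is trained to classify FractalDB's synthetic fractal categories, and the resulting weights serve as initialization for downstream fine-tuning, just as ImageNet-1k weights would. The dataset path, the choice of vit_tiny_patch16_224, and all hyperparameters are illustrative assumptions; this does not reproduce the paper's exact training recipe.

```python
# Sketch: pre-train a Vision Transformer on a FractalDB-style dataset
# laid out as one directory per fractal category. Paths, model choice,
# and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import timm

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# FractalDB images are rendered fractals, one folder per category
# (e.g. fractaldb_1k/00000/ ... fractaldb_1k/00999/). Path is hypothetical.
transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/fractaldb_1k", transform=transform)
loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=8)

# Any timm Transformer backbone could be swapped in here (ViT, Swin, DeiT, ...);
# the classification head is sized to the number of fractal categories.
model = timm.create_model(
    "vit_tiny_patch16_224", pretrained=False, num_classes=len(dataset.classes)
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(90):  # epoch count is an assumption, not the paper's schedule
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Save the pre-trained weights; for a downstream task, reload them and
# replace the classification head before fine-tuning.
torch.save(model.state_dict(), "vit_tiny_fractaldb.pth")
```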
Keywords
vision transformers, pre-training